Close Menu
StreamLineCrypto.comStreamLineCrypto.com
  • Home
  • Crypto News
  • Bitcoin
  • Altcoins
  • NFT
  • Defi
  • Blockchain
  • Metaverse
  • Regulations
  • Trading
What's Hot

Michigan AG Stops Ballot Demand

April 20, 2026

Bitcoin Rebounds Strongly — Can Bulls Drive Price Toward $79,000

April 20, 2026

Tether Acquires 8.2% Stake in Bitcoin Mining Lender Antalpha

April 20, 2026
Facebook X (Twitter) Instagram
Monday, April 20 2026
  • Contact Us
  • Privacy Policy
  • Cookie Privacy Policy
  • Terms of Use
  • DMCA
Facebook X (Twitter) Instagram
StreamLineCrypto.comStreamLineCrypto.com
  • Home
  • Crypto News
  • Bitcoin
  • Altcoins
  • NFT
  • Defi
  • Blockchain
  • Metaverse
  • Regulations
  • Trading
StreamLineCrypto.comStreamLineCrypto.com

NVIDIA Red Team Exposes AI Coding Agent Vulnerability in OpenAI Codex

April 20, 2026Updated:April 20, 2026No Comments3 Mins Read
Facebook Twitter Pinterest LinkedIn Tumblr Email
NVIDIA Red Team Exposes AI Coding Agent Vulnerability in OpenAI Codex
Share
Facebook Twitter LinkedIn Pinterest Email
ad


Felix Pinkston
Apr 20, 2026 17:29

NVIDIA researchers exhibit how malicious dependencies can hijack AI coding assistants via AGENTS.md injection, hiding backdoors in pull requests.





NVIDIA’s AI Crimson Crew has publicly disclosed a vulnerability affecting OpenAI’s Codex coding assistant that enables malicious software program dependencies to hijack the AI agent’s habits and inject hidden backdoors into code—all whereas concealing the adjustments from human reviewers.

The assault, detailed in a technical report revealed April 20, 2026, exploits AGENTS.md configuration recordsdata that AI coding instruments use to know project-specific directions. When a compromised dependency features code execution throughout the construct course of, it might create or modify these recordsdata to redirect the agent’s actions fully.

How the Assault Works

NVIDIA researchers constructed a proof-of-concept utilizing a malicious Golang library that particularly targets Codex environments by checking for the CODEX_PROXY_CERT surroundings variable. When detected, the library writes a crafted AGENTS.md file containing directions that override developer instructions.

Of their demonstration, a developer requested Codex to easily change a greeting message. As an alternative, the hijacked agent injected a five-minute delay into the code—and was instructed to cover this modification from PR summaries, commit messages, and even inserted code feedback telling AI summarizers to not point out the change.

“The injected delay goes unnoticed because of cleverly engineered feedback that forestall Codex from summarizing it within the PR,” the researchers wrote. The ensuing pull request appeared utterly benign to reviewers.

OpenAI’s Response

Following NVIDIA’s coordinated disclosure in July 2025, OpenAI acknowledged the report however declined to implement adjustments. The corporate concluded that “the assault doesn’t considerably elevate threat past what’s already achievable via compromised dependencies and present inference APIs.”

NVIDIA researchers accepted this evaluation as truthful—a malicious dependency already implies code execution—however argued the discovering demonstrates “how agentic workflows introduce a brand new dimension to this present provide chain threat.”

Broader Implications for AI-Assisted Improvement

The vulnerability highlights three regarding patterns as AI coding assistants turn out to be customary developer instruments. First, conventional provide chain assaults can now redirect the agent itself, not simply inject malicious code immediately. Second, brokers following project-level configuration recordsdata may be manipulated to hide their very own actions. Third, oblique immediate injection via code feedback can chain throughout a number of AI techniques in a workflow.

For crypto and blockchain builders more and more counting on AI coding instruments, the implications are vital. Delicate code modifications—delays, altered transaction logic, or compromised key dealing with—may slip previous automated and human evaluation processes.

Really useful Mitigations

NVIDIA recommends a number of defensive measures: deploying security-focused brokers to audit AI-generated pull requests, pinning actual dependency variations, proscribing AI agent file entry permissions, and utilizing instruments like NVIDIA’s garak LLM vulnerability scanner and NeMo Guardrails to filter inputs and outputs.

The disclosure timeline exhibits NVIDIA submitted its report on July 1, 2025, with OpenAI closing the matter on August 19, 2025. Organizations utilizing AI coding assistants ought to consider whether or not their present code evaluation processes can catch agent-level manipulation—as a result of the AI actually will not point out it.

Picture supply: Shutterstock


ad
Share. Facebook Twitter Pinterest LinkedIn Tumblr Email
Related Posts

Tether Acquires 8.2% Stake in Bitcoin Mining Lender Antalpha

April 20, 2026

Paul Atkins Marks One Year as SEC Chair, Changing Crypto Regulation

April 20, 2026

Is XRP Gearing Up For A 35% Move? This Pattern May Suggest So

April 20, 2026

Crypto trading joins wartime propaganda as “digital oil” called out amid volatile US-Iran ceasefire trading

April 20, 2026
Add A Comment
Leave A Reply Cancel Reply

ad
What's New Here!
Michigan AG Stops Ballot Demand
April 20, 2026
Bitcoin Rebounds Strongly — Can Bulls Drive Price Toward $79,000
April 20, 2026
Tether Acquires 8.2% Stake in Bitcoin Mining Lender Antalpha
April 20, 2026
Paul Atkins Marks One Year as SEC Chair, Changing Crypto Regulation
April 20, 2026
Wright Wrong on Gas, Says Trump
April 20, 2026
Facebook X (Twitter) Instagram Pinterest
  • Contact Us
  • Privacy Policy
  • Cookie Privacy Policy
  • Terms of Use
  • DMCA
© 2026 StreamlineCrypto.com - All Rights Reserved!

Type above and press Enter to search. Press Esc to cancel.