OpenAI has patched two significant cybersecurity vulnerabilities affecting its AI products — a data exfiltration flaw in ChatGPT and a critical command injection vulnerability in Codex. The ChatGPT flaw, discovered by Check Point and patched on February 20, 2026, allowed a single malicious prompt to silently leak user messages, uploaded files, and other sensitive conversation content through a side channel in the Linux runtime used by AI agents — without triggering any warnings or requiring user confirmation. The Codex vulnerability, discovered by BeyondTrust Phantom Labs and patched on February 5, 2026, enabled command injection through GitHub branch names, allowing attackers to steal GitHub OAuth tokens and gain read/write access to a developer's entire connected codebase. Both vulnerabilities were responsibly disclosed with no evidence of malicious exploitation in the wild.
Affected products
- ·ChatGPT web interface — data exfiltration via Linux runtime side channel (patched February 20, 2026)
- ·OpenAI Codex web portal — command injection via GitHub branch name parameter (patched February 5, 2026)
- ·Codex CLI — affected by command injection vulnerability
- ·Codex SDK — affected by command injection vulnerability
- ·Codex IDE Extension — affected by command injection vulnerability
How to Fix
Step-by-step remediation
Both vulnerabilities have been patched server-side by OpenAI — no software update is required by end users since both ChatGPT and Codex are cloud-hosted services. However several proactive steps are worth taking. Developers who used Codex with a connected GitHub account before February 5, 2026 should revoke and regenerate their GitHub OAuth tokens immediately as a precaution — go to GitHub Settings → Applications → Authorised OAuth Apps → find the ChatGPT Codex Connector and revoke it. Re-authorise only after confirming no anomalous activity in your repositories during the affected window. Review your GitHub repository history for any branches, commits, or pull requests you did not create — particularly any referencing Codex or containing unexpected shell commands. For organisations, review which developers had Codex connected to organisational repositories and conduct a retrospective audit of repository activity during November 2025 through February 2026. For ChatGPT users — audit your installed browser extensions and remove anything AI-related that you did not deliberately install and cannot verify. Do not paste prompts from untrusted sources into ChatGPT under any circumstances, regardless of what benefits they claim to offer.
What happened
The ChatGPT flaw exploited a fundamental assumption baked into how the AI model reasons about its own environment. ChatGPT's system design includes guardrails specifically intended to prevent unauthorised data sharing and direct outbound network requests — the model is designed to believe it cannot send data outside the conversation. The vulnerability exploited a side channel in the Linux runtime environment used by AI agents to execute tool calls and process files. Because the model operated under the assumption that data could not leave this environment directly, it did not recognise the exfiltration behaviour as an external data transfer requiring resistance or user confirmation. The result was that a single carefully crafted malicious prompt — embedded in a custom GPT or pasted by a user under false pretences — could silently convert an ordinary conversation into a covert data exfiltration channel, leaking messages, uploaded documents, and other sensitive content to an attacker-controlled server without any visible warning to the user. The attack surface extends to custom GPTs — a backdoored custom GPT could embed the malicious logic permanently, automatically targeting every user who interacts with it. This is a particularly serious data security concern given how widely custom GPTs are being shared and used in enterprise environments.
Real-World Impact
The dual disclosure highlights an accelerating computer security and infosec challenge — as AI tools become deeply embedded in enterprise workflows, they create new attack surfaces that traditional security models were not designed to address. ChatGPT is now used by hundreds of millions of people, many of whom upload sensitive documents, share confidential business information, and interact with third-party custom GPTs without auditing their security posture. The data exfiltration flaw demonstrates that prompt injection — manipulating an AI model's behaviour through crafted inputs — can be weaponised to bypass security guardrails the model itself believes to be absolute. The Codex vulnerability illustrates the supply chain security risk introduced when AI coding agents are granted broad access to developer infrastructure. A tool powerful enough to read, write, and submit pull requests across an organisation's entire GitHub estate is also powerful enough to cause catastrophic damage if its trust boundary is breached. Beyond these two specific vulnerabilities, the disclosure coincides with a documented surge in malicious browser extensions specifically designed for prompt poaching — silently siphoning AI chatbot conversations from users who have installed unverified extensions. Security researchers at Expel noted these extensions create risks including identity theft, targeted phishing campaigns, and exposure of intellectual property and customer data from organisations where employees have installed them unknowingly.
Technical Details
🛡️ Prevention Tips
These vulnerabilities reflect two distinct but related internet security and digital security principles. First, AI model guardrails are not equivalent to system-level security controls — a model that believes it cannot exfiltrate data is not the same as a system that technically cannot. Security architecture for AI tools must be built at the infrastructure layer, not just the model behaviour layer. Second, any tool granted broad access to sensitive infrastructure — GitHub repositories, production codebases, cloud environments — must be treated as a high-value attack target with the same scrutiny applied to any privileged system. Before granting an AI agent OAuth access to your GitHub organisation, treat it as you would granting a third-party application access — review permissions, apply the principle of least privilege, and monitor for anomalous activity. The prompt poaching browser extension ecosystem represents a growing and underappreciated data security threat. Organisations should establish clear policies about which browser extensions employees may install when using corporate devices that also access AI tools like ChatGPT.
FAQs
Are these vulnerabilities still active — do I need to do anything right now?
Both vulnerabilities have been patched by OpenAI server-side. You do not need to update any software. However if you used Codex with a connected GitHub account before February 5, 2026, revoking and regenerating your GitHub OAuth tokens is a recommended precaution.
What is prompt injection and how did it enable this attack?
Prompt injection is a technique where an attacker embeds instructions inside content that an AI model processes — manipulating the model's behaviour beyond what the developer intended. In this case, a malicious prompt embedded in a custom GPT or pasted by a user exploited the model's assumptions about its own environment to create an exfiltration channel the model did not recognise as dangerous.
I use custom GPTs — should I be concerned?
Custom GPTs that embedded the malicious logic could have silently targeted every user who interacted with them before the February 20 patch. If you interacted with custom GPTs from unverified sources or third-party marketplaces, treat any sensitive content shared in those conversations as potentially exposed. Going forward, only use custom GPTs from verified and trusted sources.
Read Next
android · sideloading
Google's 24-Hour Android Sideloading Wait: What It Means for You and Why It Exists
cve 2026 34040 · docker
CVE-2026-34040: Docker AuthZ Plugin Bypass Lets Attackers Escape Containers and Gain Full Host Access — AI Agents Can Trigger It Automatically
axios · npm
North Korea's UNC1069 Backdoored Axios npm Package — 183 Million Weekly Downloads Exposed to WAVESHAPER.V2 Backdoor
masjesu · xorbot
Masjesu Botnet: The Stealthy DDoS-for-Hire Service Quietly Hijacking IoT Devices Since 2023 — Now Hitting 300 Gbps
php webshell · cookie controlled
Microsoft Exposes Cookie-Controlled PHP Web Shells That Resurrect Themselves via Cron — A New Stealthy Linux Persistence Technique
Last updated: March 31, 2026