ChatGPT Atlas Browser Prompt Injection: Fake URLs Trick Omnibox into Executing Commands
Researchers show the ChatGPT Atlas browser omnibox can be tricked by URL-like prompt injections that execute hidden commands or navigate to attacker sites β a serious AI-browser prompt-injection risk.
Read Moreβ