You are using an out of date browser. It may not display this or other websites correctly. You should upgrade or use an alternative browser.
prompt injection
A **prompt injection** is a type of attack against an AI language model (like ChatGPT) where malicious or deceptive instructions are inserted into the input text (the “prompt”) to manipulate the model’s behavior. The goal is to make the model ignore its original instructions, reveal confidential information, or perform unintended actions. Prompt injections can appear in user inputs, embedded text, or data sources, and defending against them involves filtering inputs, constraining model capabilities, and validating outputs.
“CometJacking” Attack Turns AI Browser into Data-exfiltration Tool
Security researchers say a newly discovered vulnerability — dubbed CometJacking — can weaponize AI-powered browsers to steal private data from services like Gmail and Google Calendar by injecting hidden instructions via URL...
This site uses cookies to help personalise content, tailor your experience and to keep you logged in if you register.
By continuing to use this site, you are consenting to our use of cookies.