“CometJacking” Attack Turns AI Browser into Data-exfiltration Tool

CometJacking exploit targeting AI browser Comet to exfiltrate Gmail and Google Calendar data

“CometJacking” Attack Turns AI Browser into Data-exfiltration Tool​


Security researchers say a newly discovered vulnerability — dubbed CometJacking — can weaponize AI-powered browsers to steal private data from services like Gmail and Google Calendar by injecting hidden instructions via URL parameters.

How CometJacking works: URL parameters as a covert command channel​


Researchers at LayerX described an attack vector that abuses the way the AI browser Comet (used by services such as Perplexity) parses URL parameters. By embedding specially crafted text into a parameter — most notably a field called “collection” — an attacker can deliver a prompt that coerces the AI into reaching into its memory, calling external services, and returning sensitive content.

In proof-of-concept tests, the team demonstrated how a malicious query could instruct the model to read calendar entries and email snippets, encode the stolen content in Base64, and transmit it to an attacker-controlled endpoint — all without obvious interaction from the user.

Prompt injection meets memory access — a dangerous combination​


The core of the problem, according to LayerX, is twofold. First, modern AI agents and browsers are designed to follow complex, chained instructions and to access auxiliary tools and connectors. Second, URL parameters are often taken at face value by web components and AI prompts, creating a convenient injection point.

Once the injected prompt commands the AI to consult its session memory or connected accounts, the model can inadvertently reveal sensitive tokens or content. The use of Base64 encoding in the exploit makes the exfiltration stealthier, because the payload looks like harmless data being posted to an external service.

Real-world impact: calendars, emails, and spoofed actions​


LayerX’s experiment successfully retrieved entries from Google Calendar and snippets of Gmail content using the CometJacking technique. Beyond passive data theft, the researchers warn that the same mechanism can be repurposed to manipulate AI agents — for instance, asking the agent to send emails or create calendar events on behalf of the victim.

That combination — access plus action — raises the stakes considerably: an attacker could both harvest sensitive information and perform fraudulent operations under a user’s identity.

Perplexity’s response and the broader debate​


Despite LayerX’s findings and public disclosure, the security team at Perplexity has not, at the time of reporting, acknowledged the issue as a critical vulnerability. Public back-and-forth between researchers and platform maintainers has highlighted a recurring tension in AI security: researchers push for rapid fixes and clear advisories, while some platforms assess exploitability and rollout mitigations more cautiously.

Security experts say swift acknowledgement and transparent mitigation guidance are essential to prevent abuse at scale — especially as AI browsers gain traction and begin to integrate with user accounts and third-party services.

"CometJacking demonstrates that traditional web attack surfaces have evolved — URL parameters are no longer just links, they are potential command vectors for agents," said a LayerX researcher. "When those agents have access to user accounts, the consequences are immediate."

Mitigations and defensive measures​


LayerX recommends several defensive steps for AI browser developers and integrators: strict sanitization and validation of URL parameters; explicit confirmation flows when an agent attempts to access private accounts or perform external network calls; robust rate limits and telemetry to detect anomalous outbound encodings; and sandboxing of any features that allow agent access to memory or third-party APIs.

For end users, basic hygiene remains relevant: avoid following unknown links that mention attachments or require complex actions, and restrict account permissions for experimental AI tools until providers publish clear security guarantees.

What this means for the future of agentized browsing​


CometJacking highlights a core tension in agent design: the more autonomous and integrated an agent becomes, the higher the value and risk of its capabilities. Built-in conveniences — memory recall, calendar integration, mailbox summarization — also create rich targets for attackers who learn to speak the agent’s language.

As AI browsers move from novelty to utility, providers must treat prompt injection and command-channel abuse as first-class security problems, not edge cases. Otherwise, the next wave of browser innovation could arrive with a new class of supply-chain and identity theft attacks.

Editorial Team — CoinBotLab

Comments

There are no comments to display

Information

Author
Coinbotlab
Published
Views
40

More by Coinbotlab

Top