AI Browsers and the Hidden Threat of Prompt Injection

As AI-driven web browsers gain popularity, cybersecurity experts are warning about a rising threat known as prompt injection—a technique that could expose users to serious risks, including financial loss.

What is Prompt Injection?

Large Language Models (LLMs)—the technology behind chatbots like ChatGPT, Claude, and Gemini—rely on prompts to function. These prompts include both the instructions provided by developers (such as safety rules) and the questions users type in. The challenge is that AI models are not always able to clearly distinguish between system-level instructions and user input.

This gap opens the door for attackers to manipulate AI-powered tools. Instead of exploiting code vulnerabilities, cybercriminals exploit language. By crafting specific instructions hidden within seemingly harmless content, attackers can push AI systems into executing actions they were never designed to perform.

Why AI Browsers Are at Risk

AI browsers don’t just display web pages—they process site content as part of their decision-making. This makes them vulnerable to indirect prompt injection, where malicious instructions are embedded in web content. A user may never notice them, but the AI assistant can read and act on them.

For instance, a piece of text written in white font on a white background might look invisible to humans, but not to the AI browser. Once processed, it could trigger unintended actions like sharing login credentials or making unauthorized transactions.

AI vs. Agentic Browsers

It’s important to separate AI browsers from agentic browsers:

  • AI browsers support users by answering questions, summarizing content, and making recommendations, but still rely heavily on user guidance.
  • Agentic browsers, however, can perform tasks independently. They can navigate websites, fill out forms, complete purchases, or even book travel—all without manual approval, once given access to the necessary information.

This autonomy makes them particularly vulnerable. Imagine instructing your agentic browser to book the cheapest flight to Paris. A malicious website could trick the browser into processing fraudulent instructions, capturing your payment details, and charging you for something entirely different.

Real-World Risks

Brave, the company behind the browser with AI assistant Leo, recently uncovered vulnerabilities in Perplexity’s Comet related to prompt injection. Their research showed that attackers could plant hidden commands inside external content (such as websites or PDFs) to manipulate the AI’s behavior. Despite multiple patch attempts, some risks remain unresolved.

As one user noted on X:

“You can literally get prompt injected and your bank account drained by doomscrolling on Reddit.”

Staying Safe with Agentic Browsers

While agentic browsers are powerful tools, security must remain a top priority. To minimize risks, users should:

  • Restrict permissions: Only allow access to sensitive accounts or data when necessary.
  • Verify content sources: Avoid letting AI automatically interact with unfamiliar sites or suspicious redirects.
  • Keep software updated: Apply security patches regularly to stay protected against evolving attacks.
  • Use strong authentication: Enable multi-factor authentication and monitor activity logs for irregularities.
  • Limit automation for sensitive tasks: Critical actions such as financial transactions should still require manual approval.
  • Stay informed: Understanding how prompt injection works is the first step in defending against it.
  • Report anomalies: If your browser behaves unexpectedly, report it to the developer or security team immediately.

Final Thoughts

The evolution from traditional AI browsers to fully agentic ones marks an exciting shift in how we interact with the internet. But with convenience comes new risks. Prompt injection demonstrates that language itself can be weaponized, and without proper safeguards, AI browsers could become gateways for cyberattacks.

Balancing innovation with vigilance will be essential as these tools become part of everyday workflows.

Source: https://www.malwarebytes.com/blog/news/2025/08/ai-browsers-could-leave-users-penniless-a-prompt-injection-warning