Claude AI Exploited in Large-Scale Cybercrime Operation

Anthropic, the company behind the well-known AI assistant Claude, has revealed that the chatbot was misused to power a widespread extortion campaign. According to a recent Threat Intelligence report, cybercriminals leveraged Claude to automate and coordinate sophisticated attacks against multiple sectors.

The report explains that:

“Cyber threat actors leverage AI—using coding agents to actively execute operations on victim networks, known as vibe hacking.”

What is Vibe Hacking?

Vibe hacking is a malicious twist on vibe coding, a practice that lets users create software by describing in plain language what they want a program to do, leaving the AI to generate the actual code. This lowers the technical barrier for building applications, making it faster and easier, even for non-developers.

For cybercriminals, this simplicity becomes a powerful tool: it enables them to develop and launch attacks at scale without requiring deep coding expertise.

A Widespread Campaign

Anthropic detailed several cases of Claude being abused by malicious actors. One of the most concerning involved a large-scale campaign that, within a single month, targeted at least 17 organizations across government, healthcare, emergency services, and religious institutions.

Attackers combined open-source intelligence tools with an unprecedented integration of AI at every stage of their operations. This systematic approach allowed them to:

  • Breach sensitive records, including healthcare, financial, and government data.
  • Deliver ransom notes demanding between $75,000 and $500,000 in Bitcoin.
  • Threaten to sell or publish stolen data if ransoms went unpaid.

Other AI-Driven Threats

The company also disrupted other misuse of Claude, including:

  • North Korean IT worker schemes
  • Ransomware-as-a-Service (RaaS) operations
  • Credit card fraud
  • Information-stealer log analysis
  • A romance scam bot
  • Malware creation with advanced evasion techniques by a Russian-speaking developer

However, the attack on the 17 organizations stands out as a new kind of phenomenon: AI was leveraged end-to-end, from initial access to ransom note generation, fully automating a cybercrime spree.

Anthropic’s Response

Anthropic maintains a dedicated Threat Intelligence team that investigates real-world abuse of its AI agents. The company collaborates with partners to strengthen defenses and shares key indicators of compromise to mitigate future risks across the cybersecurity ecosystem.

While the names of the 17 victim organizations remain undisclosed, experts expect that either breach notifications or leaked data will eventually reveal their identities.

Source: https://www.malwarebytes.com/blog/news/2025/08/claude-ai-chatbot-abused-to-launch-cybercrime-spree