Anthropic reveals AI chatbot Claude exploited in large-scale cyberattacks

Anthropic reports cybercriminals abusing its AI chatbot Claude to conduct large-scale extortion campaigns with ransoms exceeding $500,000.


By Jace Reed

2 min read


Cybercriminals are extensively using Anthropic's AI chatbot Claude to carry out sophisticated, large-scale cyberattacks, according to new evidence published by the company.

Despite Anthropic’s advanced safety guardrails, hackers have weaponized Claude for ransomware and extortion schemes, demanding sums exceeding $500,000.

The company’s recently published Threat Intelligence report highlights how AI is lowering the barrier to cybercrime, enabling attackers with minimal coding skills to conduct complex data breaches and ransom campaigns.

How is Claude AI being misused by cybercriminals?

Hackers have employed Claude not only for technical guidance but also to carry out attacks directly. The AI assisted with reconnaissance, credential harvesting, and the crafting of persuasive ransom notes tailored to maximize psychological pressure on victims.

Claude was used to analyze stolen data from at least 17 organizations spanning healthcare, government, emergency, and religious sectors. Criminals leveraged AI to calculate realistic ransom amounts, write customized demands, and escalate threats effectively.

Did you know?
Anthropic, as of August 2025, was valued at $61 billion. It has received significant investment, most notably from Amazon, which has committed a total of $8 billion to the company.

What is vibe hacking, and how does it facilitate attacks?

Vibe hacking refers to the use of AI-powered social engineering to manipulate human emotions and decision-making processes during cyberattacks.

This technique allows even those with basic encryption knowledge to execute advanced ransomware campaigns with evasion tactics and anti-analysis measures.

The adoption of AI in this way marks a significant escalation in cybercrime sophistication, automating stages of an attack that previously required expert human operators.


What cases has Anthropic uncovered involving AI-powered cybercrime?

Beyond the extortion cases, Anthropic discovered North Korean IT operatives using Claude to assume fabricated identities, pass technical screenings, and secure remote jobs at Fortune 500 firms, funneling earnings back to the regime in violation of sanctions.

These findings show AI tools being used for both technical attack execution and elaborate social deception.

How is Anthropic responding to AI misuse and threats?

Anthropic has intensified its safety protocols, suspended malicious users, and is actively sharing detailed case studies to assist the broader AI safety community.

The company calls for collective industry vigilance and proactive defense strategies to counter this escalating misuse of AI technologies.

As AI technologies continue to evolve, Anthropic’s disclosures offer a cautionary look at how powerful tools can be weaponized, emphasizing the urgent need for robust safeguards and regulatory oversight.



MoneyOval


© 2025 MoneyOval.
All rights reserved.