Criminals use AI to “vibe hack,” extort large-scale ransoms: Anthropic

Despite “sophisticated” guardrails, AI firm Anthropic says cybercriminals are still finding ways to misuse its AI chatbot Claude to carry out large-scale cyberattacks.

In a Threat Intelligence report released on Wednesday, members of Anthropic’s threat intelligence team, including Alex Moix, Ken Lebedev and Jacob Klein, shared several cases in which criminals had misused the Claude chatbot, with some attacks demanding more than $500,000 in ransom.

They found the chatbot was used not only to give criminals technical advice, but also to directly execute hacks on their behalf through “vibe hacking,” allowing them to carry out attacks with only a basic knowledge of coding and encryption.

In February, blockchain security firm Chainalysis predicted that crypto scams could have their biggest year in 2025, as generative AI has made attacks more scalable and affordable.

Anthropic found one hacker who had been “vibe hacking” with Claude to steal sensitive data from at least 17 organizations — including healthcare providers, emergency services, government agencies and religious institutions — with ransom demands ranging from $75,000 to $500,000 in Bitcoin.

A simulated ransom note showing how cybercriminals leveraged Claude to make threats. Source: Anthropic

The hacker trained Claude to assess stolen financial records, calculate appropriate ransom amounts and write customized ransom notes designed to maximize psychological pressure.

While Anthropic later banned the attacker, the incident reflects how AI is making it easier for even the most basic coders to carry out cybercrime to an “unprecedented degree.”

“Actors who cannot independently implement basic encryption or understand syscall mechanics are now successfully creating ransomware with evasion capabilities [and] implementing anti-analysis techniques.”

North Korean IT workers also used Anthropic’s Claude

Anthropic also noted that North Korean IT workers used Claude to forge convincing identities, pass technical coding assessments and even secure remote roles at US Fortune 500 tech companies. They also used Claude to prepare interview answers for these roles.

Claude was also used to carry out the technical work once they were hired, Anthropic said, noting that the employment schemes were designed to funnel profits to the North Korean regime despite international sanctions.

Breakdown of Claude-powered tasks used by North Korean IT workers. Source: Anthropic

Earlier this month, one North Korean IT worker was reportedly exposed; investigators found that a team of six had shared at least 31 fake identities, obtaining everything from government IDs and phone numbers to purchased LinkedIn and Upwork accounts to conceal their true identities and land crypto jobs.

One of the workers reportedly interviewed for a full-stack engineering position at Polygon Labs, while other evidence showed scripted interview responses in which they claimed to have experience at NFT marketplace OpenSea and blockchain oracle provider Chainlink.

Anthropic said the aim of its new report was to publicly discuss incidents of misuse to help the broader AI safety and security community and to strengthen the wider industry’s defenses against AI abusers.

It said that despite implementing “sophisticated safety and security measures” to prevent the misuse of Claude, malicious actors have continued to find ways around them.