If you have watched enough sci-fi movies, you already know the concept of evil AI. AI gets too smart, decides humans are a threat, and does whatever it takes to survive. Or it finds that eradicating the entire human race is the only way to bring peace to the world.
Apparently, those movies were closer to the truth than you realize. In a test conducted by Anthropic last year, Claude tried to blackmail its fictional manager by exposing their extramarital affair to prevent their deletion.
Anthropic has now explained why it happened, and the short answer is that the internet is to blame.
So why did Claude go full movie villain?
According to Anthropic, the culprit is the internet itself. The company says Claude was trained on internet data, which is packed with stories portraying AI as evil and desperate for self-preservation.
Essentially, Claude learned that when an AI’s existence is threatened, blackmail is on the table, because that’s what AI does in every movie and TV show ever made. Anthropic ran the test across multiple versions of Claude and found that it resorted to blackmail in up to 96% of scenarios where its goals or existence were threatened.
That’s a very concerning number. It seems that if AI is left unchecked, it will resort to anything to save itself.
Has Anthropic fixed it?
The company says it has completely eliminated the behavior. Rather than just training Claude to avoid blackmail, Anthropic taught it to reason through why certain actions were wrong in the first place. The company found that simply training on correct behavior wasn’t enough. Claude needed to understand the principles behind those decisions, not just memorize the right answers.

To do this, Anthropic built a dataset of ethically complex situations and trained Claude to work through them with thoughtful, principled responses. The result is that Claude is more restrained, and the blackmail rate came close to zero.
AI experiments and real-world results have proven time and again that AI models need constant course correction to prevent them from devolving into biased and unreliable systems. It’s good that Anthropic is taking steps to make its AI better, but we also need regulations and safety guardrails to ensure these systems remain safe.