The Pentagon Is Punishing Anthropic for Refusing to Build a Killing Machine on Demand


Washington handed seven AI giants access to classified military networks. The one company that dared to say 'not without safeguards' got blacklisted, replaced, and pushed into a courtroom fight. This is what happens when ethics meets the defence budget.

by Kasun Illankoon, Editor in Chief at Tech Revolt


When the United States Department of Defense announced on Friday that it had signed classified-network AI agreements with seven of the world's most powerful technology companies, the list was notable as much for who was missing as for who made the cut. OpenAI, Google, Microsoft, Amazon Web Services, Nvidia, SpaceX and a smaller firm called Reflection were all included. Anthropic, the San Francisco AI safety company whose Claude model had until recently been the only artificial intelligence tool operating inside the Pentagon's classified networks, was not.

This was not an oversight. It was a decision. And understanding why that decision was made tells you something important about where American AI policy is heading, who gets to shape it, and what price principled disagreement now carries in Washington.

The Pentagon declared Anthropic a 'supply chain risk.' That label has historically been reserved for companies linked to foreign adversaries.

How Anthropic Lost the Room

The short version goes like this: Anthropic refused to let the US military use its AI for what the Pentagon calls 'all lawful purposes.' That phrase is doing a great deal of work. In practice, it means autonomous weapons systems, mass surveillance, and lethal targeting decisions made at machine speed. Anthropic said no. It insisted that any government contract include safety guardrails specifically designed to prevent Claude from being used in those contexts. The Trump administration said that was unacceptable.

What followed was a remarkable escalation. The Pentagon declared Anthropic a 'supply chain risk,' a designation with serious teeth. Historically, that label has been applied almost exclusively to companies with alleged ties to foreign adversaries, most notably Chinese technology firms. Using it against a domestic AI company, founded by former OpenAI researchers with deep roots in the American technology establishment, was, to put it gently, unusual.

Anthropic sued. A federal judge in California blocked the Pentagon's move last month, ruling in Anthropic's favour and preventing the 'supply chain risk' designation from taking effect. But the damage, commercially at least, was already done. The Friday announcement made clear that the department had moved on, signing its seven new partners into classified AI infrastructure while Anthropic fought its legal battles in court.

The Pentagon's GenAI.mil platform has now been used by 1.3 million Department of Defense personnel, a figure the DoD cited as evidence of its accelerating AI integration.

A Very Lucrative Club Anthropic Is Not In

The financial stakes here are not trivial. Last year's One Big Beautiful Bill Act earmarked a substantial sum of federal money for the Pentagon to spend on AI and offensive cyber operations. That money is now being distributed. Anthropic's seven new competitors are positioned to receive contracts worth potentially billions of dollars. Anthropic is not.

The companies involved are not modest players. OpenAI already has a separate Pentagon contract signed in February 2026. Google and Microsoft have long-standing defence relationships. Amazon Web Services runs significant classified government cloud infrastructure. Nvidia supplies the chips that power virtually all serious AI computation. SpaceX, Elon Musk's rocket company, is an increasingly prominent government contractor. Signing all of them in a single announcement, explicitly in the context of building what the Pentagon called an 'AI-first fighting force,' is a statement of intent.

'The new agreements will transform the military as an AI-first fighting force and will strengthen our warfighters' ability to maintain decision superiority across all domains of warfare,' the Pentagon said in its official release. That language is not incidental. It tells you exactly what the Department of Defense wants AI for: not administrative efficiency, not logistics optimisation, but warfighting advantage. Anthropic's objections were aimed precisely at this use case.

'These agreements will strengthen our warfighters' ability to maintain decision superiority across all domains of warfare.' The Pentagon's own language explains why Anthropic objected.

The Safety Argument and Why It Made Enemies

To understand why Anthropic's position drew such hostility, you have to understand what AI safety actually means in this context. The company was not arguing that Claude should never be used by the military. It was arguing that autonomous lethal decision-making (systems that identify and engage targets without meaningful human oversight) presented risks serious enough to require contractual restrictions.

This is not a fringe position. Researchers across the AI safety field, several allied governments, and a notable portion of the international arms control community have raised similar concerns. The argument is that AI systems, however capable, can fail in unpredictable ways under adversarial conditions, and that delegating life-and-death decisions to such systems creates accountability gaps that existing military law and international humanitarian law struggle to address.

But that argument, reasonable on its merits, landed in a particular political moment. The Trump administration came in with a philosophy of AI deregulation, viewing safety frameworks as obstacles to American competitiveness rather than as legitimate risk management. In that context, Anthropic's insistence on guardrails was read not as responsible engineering but as commercial obstruction dressed up in ethical language.

The administration had leverage, and it used it. By signing Anthropic's competitors into classified networks, it demonstrated that Anthropic was replaceable. By labelling it a supply chain risk, it threatened to cut it off from government business entirely. The message was clear: play by our rules or lose the market.

Why the White House Has Quietly Started Talking Again

Here is where the story gets more complicated. Despite the legal battle, despite the blacklist, despite the Friday announcement that conspicuously omitted Anthropic's name, the White House has reportedly reopened discussions with the company in recent weeks. The reported reason: Anthropic has announced several technology breakthroughs, developments apparently impressive enough to make the administration reconsider its approach.

This matters. It suggests that the Pentagon's posture is less about principled objection to Anthropic specifically, and more about leverage. The administration froze Anthropic out to create pressure. Now that pressure is being used as a negotiating tool. The implicit offer appears to be: come to terms, drop the guardrail requirements, and the contracts reopen.

Whether Anthropic will bend is the central question. The company's founders, including Dario and Daniela Amodei, have staked the company's identity on the idea that it is possible to build powerful AI responsibly. If they abandon the guardrails that led to this confrontation, they win back the government revenue but lose the argument that made them distinctive. If they hold the line, they keep their principles but cede enormous market share to competitors who have made no such promises.

Anthropic founded its identity on the idea that powerful AI and responsible development are compatible. The Pentagon deal tests whether that identity is sustainable.

What This Tells Us About AI Governance in America

Step back from the Anthropic story specifically, and what you see is a broader pattern taking shape. The United States government is moving aggressively to integrate artificial intelligence into its military infrastructure. It is doing so with partners who have agreed, explicitly or implicitly, to provide tools for 'all lawful operational use' without imposing their own ethical constraints on how those tools are deployed.

That is a significant policy choice. It means that the companies shaping America's military AI capabilities are those willing to subordinate their own safety judgements to the Pentagon's definition of lawful use. Companies with stricter positions are excluded. The market, in this case the defence budget, becomes the mechanism by which the government enforces a particular philosophy of AI deployment.

One can debate whether the Pentagon's approach is strategically wise. Autonomous weapons systems create escalation risks that senior military commanders, not just AI ethicists, have repeatedly flagged. The history of military technology is full of cases where speed of deployment outran the doctrine needed to govern it. But the administration has made its bet, and it has placed that bet with seven companies whose names are now attached to America's classified AI infrastructure.

Anthropic's name is not on that list. For now, that is a financial penalty. Over time, if the excluded company can hold its position in the courts and in the market, it may become something else: evidence that there is a viable path for AI companies that refuse to build everything their most powerful customers demand. Or it may simply become a cautionary tale about what happens to those who say no in Washington when the administration is not in a listening mood.

The next hearing is in California. The next negotiation is in Washington. The outcome of both will say a great deal about who actually gets to set the rules for artificial intelligence in America, and what those rules will permit.
