Tech giants back Anthropic in dispute with US government

By: Admin

Tuesday, March 17, 2026

Several of the largest technology companies in the United States have thrown their support behind artificial intelligence firm Anthropic in its legal challenge against the administration of Donald Trump, marking an unusual moment of alignment across Silicon Valley.

Since the start of the week, companies including Google, Amazon, Apple and Microsoft have filed statements supporting Anthropic’s lawsuit. The case challenges a decision by US Defence Secretary Pete Hegseth to designate the company a “supply chain risk”, a move that effectively bars its technology from government use.

Anthropic argues the decision was retaliatory and linked to its refusal to allow its artificial intelligence tools to be used for mass surveillance or autonomous weapons systems. The company has also said its public statements on the issue triggered criticism from senior figures in the administration.

In court filings, the supporting technology firms warned that the government’s actions could have consequences beyond a single company.

Microsoft, which maintains extensive contracts with the US government and defence agencies, said the decision could create “broad negative ramifications for the entire technology sector”. The company added that AI systems should not be used for domestic mass surveillance or to enable autonomous systems capable of initiating conflict without human oversight.

Growing support across the tech sector

Support for Anthropic has come from multiple directions.

A joint amicus brief — a legal filing made by parties with an interest in the case — was submitted by several industry groups, including the Chamber of Progress. The organisation represents a coalition of technology companies including Google, Apple, Amazon and Nvidia.

The group said it was concerned that the government’s decision could undermine protections for speech under the First Amendment of the US Constitution. It also argued that penalising a company for public statements could have wider implications for how businesses engage in political or ethical debates.

Another amicus brief was filed by dozens of employees from companies including OpenAI and Google. Separately, nearly two dozen former senior US military officials also submitted a filing warning that the government’s actions could discourage companies from engaging with national security projects.

They argued the decision could send a message that working with defence institutions carries the risk of retaliation if disagreements emerge.

Not all major technology firms have publicly joined the effort. Meta Platforms has so far remained absent from the coalition backing Anthropic. The company previously belonged to the Chamber of Progress but withdrew from the group in 2025.

The origins of the dispute

The conflict began earlier this year during negotiations between Anthropic and the Department of Defense over the terms governing how its AI tools could be used by government agencies.

Anthropic said it insisted that language remain in its contracts preventing the use of its systems for mass surveillance or autonomous weapons. According to the company, officials sought to remove those restrictions.

Talks continued for several weeks before the disagreement became public in February. Anthropic chief executive Dario Amodei later confirmed that the company would not agree to eliminate the safeguards.

Soon after, President Trump criticised the company publicly and announced that its technology — including its Claude AI system — would be removed from use across government agencies.

The Department of Defense subsequently designated Anthropic a “supply chain risk”, citing concerns about security. It is believed to be the first time an American technology firm has received such a classification.

During a court hearing in San Francisco this week, Anthropic’s legal team said defence officials had contacted customers directly and urged them to reconsider working with the company.

Lawyers representing the government did not deny that outreach had occurred but declined to say whether further action against Anthropic was planned.

What this may mean

The dispute highlights a growing tension between the rapid development of artificial intelligence and questions about how governments seek to regulate or deploy the technology.

AI systems are increasingly viewed as strategically important in areas ranging from national security to economic competition. As a result, governments may push technology companies to provide tools that align with defence priorities.

At the same time, many AI developers have introduced ethical guidelines designed to limit how their systems are used. Conflicts between those guidelines and government demands could become more common.

The case may also raise questions about the relationship between Silicon Valley and political power. Several technology executives have maintained ties with the Trump administration, making the industry’s collective response notable.

If the court sides with Anthropic, it could reinforce the ability of companies to refuse certain uses of their technology without facing government penalties. A ruling in favour of the government, however, might strengthen federal authority to restrict companies it considers national security risks.

Either outcome could shape how AI companies negotiate future contracts with governments.

For now, the case is likely to be closely watched across the technology sector. As artificial intelligence becomes more central to both economic growth and military capability, the balance between corporate autonomy, free expression and national security is likely to remain a contentious issue.
