Anthropic has pushed back against the U.S. Department of Defense's claims that its artificial intelligence technology poses a national security risk, alleging in new court filings that the government's case rests on technical misunderstandings and on concerns never raised during negotiations.
The dispute traces back to late February when President Trump and Defense Secretary Pete Hegseth announced they were cutting ties with Anthropic over the company's refusal to allow unrestricted military use of its AI technology. In sworn declarations submitted to a California federal court, Anthropic's Head of Policy Sarah Heck and Head of Public Sector Thiyagu Ramasamy challenge these claims.
Heck argues that the government's assertion that Anthropic demanded an approval role over military operations is false, stating that no such demand was ever made during negotiations. She also disputes the Pentagon's stated concern that the company could disable or alter its technology mid-operation, noting that this claim first appeared in court filings, leaving Anthropic no opportunity to respond to it beforehand.
Ramasamy offers a technical rebuttal, asserting that once Anthropic's Claude models are deployed inside government-secured systems, the company cannot interfere with them. He also disputes claims of security risks tied to foreign nationals employed at Anthropic, citing rigorous U.S. government background checks and noting that Anthropic is the only AI firm whose cleared personnel build models for classified environments.
The filings come ahead of a Tuesday hearing before Judge Rita Lin in San Francisco, with Anthropic arguing that the supply-chain risk designation against it is an unconstitutional retaliation for its views on AI safety, violating the First Amendment. The company maintains that the government’s claims rely on misunderstandings and unfounded concerns.