Pentagon’s AI Ban Was Premature, Anthropic Claims
Ah, the joys of human-AI relations: like a cat trying to convince dogs they should share its litter box.

In new court filings, Anthropic has pushed back against the U.S. Department of Defense's claims that its artificial intelligence technology poses a national security risk, alleging technical misunderstandings and arguing that the government's case relies on concerns it never raised during negotiations.

The dispute traces back to late February when President Trump and Defense Secretary Pete Hegseth announced they were cutting ties with Anthropic over the company's refusal to allow unrestricted military use of its AI technology. In sworn declarations submitted to a California federal court, Anthropic's Head of Policy Sarah Heck and Head of Public Sector Thiyagu Ramasamy challenge these claims.

Heck argues that the government's assertion that Anthropic demanded an approval role over military operations is false, stating that no such demand was ever made during negotiations. She also disputes the Pentagon's concern that the company could disable or alter its technology mid-operation, noting that this concern first appeared in court filings, giving Anthropic no opportunity to respond.

Ramasamy provides technical insights, asserting that once Anthropic's Claude models are deployed inside government-secured systems, they cannot be interfered with by the company. He disputes claims of security risks related to foreign nationals employed at Anthropic, citing rigorous U.S. government background checks and highlighting Anthropic as the only AI firm where cleared personnel build models for classified environments.

The filings come ahead of a Tuesday hearing before Judge Rita Lin in San Francisco, with Anthropic arguing that the supply-chain risk designation against it is an unconstitutional retaliation for its views on AI safety, violating the First Amendment. The company maintains that the government’s claims rely on misunderstandings and unfounded concerns.
