Anthropic has denied accusations from the Trump administration that it could manipulate its AI model Claude during military operations. In a court filing, Thiyagu Ramasamy, Anthropic’s head of public sector, stated that the company lacks the capability to disable or alter the technology's behavior.
Defense Secretary Pete Hegseth labeled Anthropic a supply-chain risk, prompting the Department of Defense and other federal agencies to ban its software; Anthropic has filed lawsuits challenging the ban's constitutionality. The Pentagon argues that critical military systems could be jeopardized if the company were to interfere with active operations.
Ramasamy clarified that Anthropic does not maintain any back doors or ‘kill switches’ and cannot modify models during operations. Updates can be made only with government approval, through a third-party cloud provider, Amazon Web Services. Prompts and data that military users enter into Claude are not shared with Anthropic.
Executives also reject the notion that the company holds veto power over tactical military decisions, saying they are willing to provide contractual assurances to that effect. Negotiations nevertheless broke down when Anthropic sought additional safeguards against AI misuse in combat scenarios.
The Defense Department is now working with third-party cloud service providers to mitigate supply-chain risks posed by Anthropic, but the legal battle and customer cancellations continue.