Trump Criticizes Anthropic as ‘Woke,’ Orders Feds to Stop Using Claude AI

President Donald Trump on Friday asked US government agencies to stop using Anthropic’s Claude AI after the company refused to give the Defense Department permission to use it for mass surveillance at home or autonomous weapons systems.
The president wrote on the social media site Truth Social, which he owns, that he is ordering the federal government to “IMMEDIATELY TERMINATE” the use of Anthropic tools, saying agencies such as the Department of Defense will have six months to wind down their use. He also criticized Anthropic as a “RADICAL LEFT, WOKE COMPANY.” The post marked the latest step in a feud that has escalated this week between Anthropic and the federal government.
Claude is widely used throughout the Pentagon, including in classified programs, and the Trump administration has sought to use the technology for “any legitimate purpose.” Anthropic has insisted that its existing contract bar the technology from being used for mass surveillance of Americans or in autonomous weapons systems without human input.
Earlier this week, Defense Secretary Pete Hegseth told Anthropic CEO Dario Amodei that he would use unusual powers to force Anthropic to either allow the Pentagon to use Claude for any legitimate purpose or be labeled a supply chain risk — effectively barring its use by the government or defense contractors. Hegseth gave Anthropic a Friday deadline to comply.
Amodei, in a statement, said the company, which was founded with a focus on AI safety, “cannot in good conscience agree to [the Pentagon’s] request” to remove contract provisions that say Claude cannot be used for autonomous weapons programs or for domestic surveillance.
Concerns about AI and mass surveillance
Amodei expressed concern that the law has not kept pace with AI-enabled surveillance of Americans. The government can already buy information like Americans’ browsing history and records of individual movements without a warrant, but artificial intelligence is raising the stakes. “Powerful AI makes it possible to combine this scattered, innocuous data into a complete picture of any person’s life – automatically and at scale,” he wrote.
Michael Pastor, director of technology law programs at New York Law School, said in an email that it is common in contract law for parties to seek clarity on terms. “Anthropic is right to push hard on what it means for ‘legitimate purposes’,” he said. “If the Pentagon isn’t willing to specify whether it will use Anthropic’s technology to monitor large numbers of people at home, that raises flags Anthropic seems justified in waving.”
Anthropic’s Claude is reportedly the most widely used AI system in the US military. Others in use may include tools from OpenAI, Google or Elon Musk’s xAI.
In an internal memo reported by the Wall Street Journal on Friday, OpenAI CEO Sam Altman reportedly told employees that the company has red lines similar to Anthropic’s — no mass domestic surveillance or autonomous weapons. Altman said he believes those guardrails can be handled through technical requirements, such as requiring models to be hosted in the cloud. (Disclosure: Ziff Davis, CNET’s parent company, in 2025 filed a lawsuit against OpenAI, alleging that it infringed Ziff Davis’ copyrights in training and operating its AI systems.)
Google and OpenAI employees have circulated a petition asking their companies to stand with Anthropic in rejecting the use of AI models for mass surveillance or fully autonomous lethal weapons systems. The petition said the Pentagon is “trying to isolate each company for fear that the other will agree. That strategy only works if none of us know where the others stand.”
As with consumer technology, artificial intelligence systems have seen widespread adoption in government and military settings. These tools have grown dramatically more capable over the past few years, and that pace of change hasn’t slowed, but oversight and regulation of AI haven’t kept up. AI has increased the potential harm of corporate or government surveillance by making it easier and cheaper.
Pastor said the dispute could have important consequences for the leverage governments and technology companies have over each other when their views on the appropriate use of technology collide. “Anthropic may feel that backing down here opens a Pandora’s box of how Claude can be used,” he said.