Anthropic sues the Pentagon as Claude’s app usage increases

Anthropic's AI app, Claude, is climbing to the top of download charts around the world – even as the company fights a legal battle with the Pentagon, which has accused it of posing a national security risk.
In a complaint filed Monday in the U.S. District Court for the Northern District of California, Anthropic says the federal government launched an unprecedented crackdown on the company after it refused to abandon its safety policies. Anthropic says it does not want its AI to be used for lethal private warfare or mass surveillance of Americans.
“Anthropic is bringing this suit because the federal government retaliated against it for upholding that policy,” the complaint states. “When Anthropic held firm in its judgment that Claude could not be safely or reliably used in private lethal warfare and mass surveillance of Americans, the President ordered every federal agency to ‘IMMEDIATELY Cease all use of Anthropic technology.’”
The fallout was swift and widespread. The General Services Administration terminated Anthropic’s government-wide contract. The Department of the Treasury, the Federal Housing Finance Agency, the Department of State, and other agencies announced that they were severing ties with the company.
However, the controversy seems to have done little to dampen the public’s enthusiasm for Anthropic’s products. If anything, users appear more enthusiastic now that Anthropic is taking on the Trump administration.
The company says it is now adding more than a million new users a day around the world, and has broken its daily sign-up records since the controversy erupted.
Claude currently holds the top spot in Apple’s App Store in 16 countries, surpassing OpenAI’s ChatGPT and Google’s Gemini in more than 20 markets, according to data from AppFigures.
The case marks the culmination of growing tensions between Anthropic and the Department of Defense, which the Trump administration calls the Department of War. The company held a major contract that made its generative AI systems widely used throughout the Pentagon.
That relationship unraveled when Defense Secretary Pete Hegseth pushed to significantly expand the role of AI throughout the military and sought unrestricted access to AI technology. The effort required every AI company with a Pentagon contract to renegotiate its terms.
But because Anthropic had become the military’s preeminent AI provider — and Claude was reportedly the only advanced model allowed to work in classified programs — the company found itself at the center of a standoff between Hegseth and Trump.
The collapse was as much about personal conflict as competing values, according to the New York Times. Pentagon Chief Technology Officer Emil Michael, a former Uber executive, had grown increasingly frustrated with Anthropic CEO Dario Amodei over weeks of negotiations.
As negotiations broke down, Michael began negotiating with OpenAI — a company whose CEO, Sam Altman, had been a Trump favorite. Hours after the Pentagon’s deadline passed without a deal, Altman announced that OpenAI had reached an agreement with the Department of Defense.
The lawsuit contends that the government’s actions — including Trump’s directive ordering all federal agencies to immediately stop using Anthropic’s AI, and Secretary Hegseth’s designation of the company as a supply chain risk — violate the First Amendment, the due process protections of the Fifth Amendment, and the Administrative Procedure Act.
Anthropic’s filing notes that the supply chain risk label has historically been reserved for foreign companies believed to pose a threat to national security; it has never before been applied to an American company. Anthropic is asking the court to declare the government’s actions unlawful and to issue an injunction preventing their enforcement.