OpenAI reaches AI deal with Department of Defense after Anthropic conflict

SAN FRANCISCO – OpenAI, the maker of ChatGPT, said Friday that it had reached an agreement with the Pentagon to provide its artificial intelligence technology for classified programs, hours after President Donald Trump ordered federal agencies to stop using AI technology made by its rival Anthropic.
Under the agreement, OpenAI will allow the Pentagon to use its AI programs for any legal purpose, for as long as the Pentagon needs them. But OpenAI also said it had found a way to ensure its technology would meet its own safety goals by adding technical guardrails to its systems.
“Throughout our cooperation, the DoW has shown deep respect for security and the desire of our partners to achieve the best result,” Sam Altman, the chief executive of OpenAI, said in a social media post, using the initials of the Department of War, the Trump administration’s new name for the Department of Defense.
The Defense Department did not immediately respond to a request for comment.
The deal was a business and political coup for OpenAI, which capitalized on its rival’s troubles. Anthropic, which competes with OpenAI, had been locked in a standoff with the Pentagon in recent weeks over how its AI could be used. In negotiations over a $200 million contract, the Pentagon demanded that it be able to use Anthropic’s AI systems for all official purposes, or it would cut the company off from government business.
But Anthropic said it needed terms ensuring that its AI technology could not be used for surveillance of Americans or for autonomous lethal weapons. The Pentagon countered that a private contractor should not get to decide how its tools are used for national security. The disagreement spilled into public view this past month and intensified as both sides dug in their heels.
Anthropic and the Pentagon failed to agree on terms by the 5:01 p.m. deadline Friday. Defense Secretary Pete Hegseth then designated Anthropic a “national security procurement risk,” a label that cuts the AI company off from business with the US government. Trump also weighed in, calling the startup a “Left AI company.”
Amid the maelstrom, OpenAI stepped in. This past week, Altman publicly backed Anthropic’s position that AI should not be used for domestic surveillance or autonomous weapons. On CNBC on Friday, he said he had a lot of confidence in Anthropic and that the company “really cares about security.”
At the same time, Altman had been in talks with the Pentagon since Wednesday about a technology agreement of his own, said two people familiar with the negotiations who spoke on the condition of anonymity.
Altman took a different tack with the Defense Department than Anthropic did, agreeing to allow the use of OpenAI’s technology for all legal purposes. Along the way, he also negotiated the right to build safeguards into OpenAI’s technology that would prevent its systems from being used in ways the company does not want.
OpenAI will “build technical safeguards to ensure that our models behave correctly, which is what DoW wants,” Altman said.
These moves allowed Altman to say he was upholding safety principles around AI while still landing a Pentagon contract. He added that the Pentagon had agreed to let some OpenAI employees work alongside government officials on classified projects “to help with our models and ensure their security.”
Anthropic did not respond to a request for comment on the OpenAI deal.
(The New York Times sued OpenAI and Microsoft in 2023, accusing them of copyright infringement over the use of news content for AI systems. The two companies have denied those claims.)
Altman and Dario Amodei, CEO of Anthropic, have long been bitter rivals. Amodei and several other Anthropic founders previously worked at OpenAI. But they left in 2021 after disagreements with Altman and others about how AI should be funded, developed and rolled out.
At a recent AI conference in India, Altman and Amodei were caught on video declining to shake hands during a photo session with Prime Minister Narendra Modi.
It may take some time for the Pentagon to put OpenAI’s technology to use. The company has yet to be approved for classified work, in part because its technology is not available on Amazon’s cloud computing services, which is how the government often accesses classified systems.
That could change after OpenAI signed a partnership with Amazon on Friday. Amazon, a new investor in OpenAI, is pouring $50 billion into the AI startup as part of the $110 billion in funding OpenAI has raised to pay for its continued growth and spur AI development.
OpenAI also recently signed leases for more than 430,000 square feet of office space in the South Bay, according to documents on file with the Santa Clara County Recorder’s Office.
The Pentagon may also use AI services from other Anthropic competitors. Elon Musk’s xAI has a contract with the Department of Defense, and the Pentagon said last week that it had reached an agreement to use xAI technology in classified operations.
Google has held similar discussions, but it is unclear where they stand. In 2018, during Trump’s first administration, Google withdrew from a military contract after employee protests. It has since agreed to work with the Pentagon again.
This past week, as the Pentagon threatened to cut ties with Anthropic, dozens of OpenAI employees signed an open letter urging other AI companies to commit that the technology not be used for domestic surveillance or autonomous weapons.
“They are trying to isolate each company for fear that the other will agree,” the letter reads, referring to the Pentagon. “That strategy only works if none of us know where the rest of us stand. This letter helps create understanding and unity amid this pressure from the Department of Defense.”
This article first appeared in The New York Times.



