TLDR
- OpenAI CEO Sam Altman acknowledged that the deal with the Pentagon was hastily arranged and appeared “opportunistic and sloppy.”
- OpenAI is modifying the agreement to ensure its AI tools will not be utilized for domestic surveillance of U.S. citizens.
- The Pentagon has confirmed that intelligence agencies like the NSA will not use OpenAI’s tools.
- The agreement was finalized shortly after President Trump prohibited federal agencies from using Anthropic’s AI tools.
- Altman publicly advocated for Anthropic to be offered the same contractual conditions as OpenAI.
OpenAI Revises Pentagon Deal Following Criticism; Altman Acknowledges Hasty Announcement
OpenAI CEO Sam Altman has conceded that the company’s recent agreement with the U.S. Department of Defense was not handled well. In a post on X, which he described as an internal memo, he stated that the company “shouldn’t have rushed” the announcement of the deal.
Here is Altman’s re-post of the internal memo:
We have been working with the DoW to make some additions in our agreement to make our principles very clear.
1. We are going to amend our deal to add this language, in addition to everything else:
“• Consistent with applicable laws,…
— Sam Altman (@sama)
“We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy,” Altman stated.
The deal was announced last Friday, just hours after President Donald Trump ordered federal agencies to cease using Anthropic’s AI tools. It also came shortly before the U.S. conducted strikes in Iran.
The timing of the announcement quickly drew criticism online. Reports indicate that many users deactivated their ChatGPT accounts and began using Anthropic’s Claude app following the news.
OpenAI is currently collaborating with the Pentagon to amend the contract terms. These revisions are intended to more clearly articulate the company’s principles within the formal agreement.
A significant addition stipulates that “the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals.” Additionally, the Pentagon has confirmed that intelligence agencies, such as the NSA, will not utilize OpenAI’s tools.
Altman noted that any future services provided to these agencies would necessitate a separate contract modification.
The Anthropic Situation Preceded This Development
This situation arises after discussions between Anthropic and the Defense Department reached an impasse. Anthropic had sought assurances that its tools would not be employed for domestic surveillance or for the development of autonomous weapons without human oversight.
Defense Secretary Pete Hegseth announced on Friday that Anthropic would be classified as a supply-chain threat following the breakdown of negotiations. Government officials had reportedly expressed concerns for months regarding Anthropic’s strong emphasis on AI safety.
The dispute became public knowledge after it was revealed that Anthropic’s AI had been used by the U.S. military during a January operation to apprehend Venezuelan President Nicolás Maduro. Anthropic did not publicly object to this use at the time.
In fact, Anthropic was the first AI company to deploy models on the Defense Department’s classified network, following an initial agreement last year.
Altman Advocates for Equal Treatment for Anthropic
In his post, Altman also directly addressed the repercussions for Anthropic. He mentioned speaking with officials over the weekend and opposing the supply-chain threat designation.
“I reiterated that Anthropic should not be designated as a supply chain risk, and that we hope the Department of Defense offers them the same terms we’ve agreed to,” he wrote.
Anthropic was established in 2021 by former OpenAI employees who departed due to differing views on the company’s trajectory.
The company has positioned itself as an AI firm prioritizing safety. The Pentagon has not yet issued a public response to Altman’s request for equal terms.