Anthropic CEO says it 'cannot in good conscience accede' to Pentagon's
demands for AI use
[February 27, 2026]
By KONSTANTIN TOROPIN and MATT O'BRIEN
WASHINGTON (AP) — Anthropic CEO Dario Amodei said Thursday that the
artificial intelligence company “cannot in good conscience accede” to
the Pentagon’s demands to allow unrestricted use of its technology,
deepening a public clash with the Trump administration that is
threatening to pull its contract and take other drastic steps by Friday.
The maker of the AI chatbot Claude said in a statement that it’s not
walking away from negotiations but that new contract language received
from the Defense Department “made virtually no progress on preventing
Claude’s use for mass surveillance of Americans or in fully autonomous
weapons.”
Sean Parnell, the Pentagon’s top spokesman, said earlier on social media
that the military “has no interest in using AI to conduct mass
surveillance of Americans (which is illegal) nor do we want to use AI to
develop autonomous weapons that operate without human involvement.”
Anthropic’s policies prevent its models from being used for those
purposes. It is the only one of its peers — the Pentagon also has
contracts with Google, OpenAI and Elon Musk’s xAI — that has not
supplied its technology to a new U.S. military internal network.
“It is the Department’s prerogative to select contractors most aligned
with their vision,” Amodei wrote in a statement. “But given the
substantial value that Anthropic’s technology provides to our armed
forces, we hope they reconsider.”

Defense Secretary Pete Hegseth gave Anthropic an ultimatum on Tuesday
after meeting with Amodei: Allow the Pentagon to use the company's AI as
it sees fit by Friday or risk losing its government contract. Military
officials warned that they could go even further and designate the
company as a supply chain risk or invoke a Cold War-era law called the
Defense Production Act to give the military more sweeping authority to
use its products.
Amodei said Thursday that “those latter two threats are inherently
contradictory: one labels us a security risk; the other labels Claude as
essential to national security.”
In a post before Amodei's announcement, Parnell reiterated that the
Pentagon wants to “use Anthropic’s model for all lawful purposes” but
didn’t offer details on what that entailed. He said opening up use of
the technology would prevent the company from “jeopardizing critical
military operations.”
“We will not let ANY company dictate the terms regarding how we make
operational decisions,” he said.
Emil Michael, defense undersecretary for research and engineering, later
lashed out at the Anthropic CEO, alleging on X that Amodei “has a
God-complex” and “wants nothing more than to try to personally control
the US Military and is ok putting our nation’s safety at risk.”
Pages from the Anthropic website and the company's logos are
displayed on a computer screen in New York on Thursday, Feb. 26,
2026. (AP Photo/Patrick Sison)

The talks that escalated this week began months ago. Amodei said
that if the Pentagon doesn't reconsider its position, Anthropic
“will work to enable a smooth transition to another provider.”
Sen. Thom Tillis, a North Carolina Republican who is not seeking
reelection, said the Pentagon has been handling the matter
unprofessionally while Anthropic is “trying to do their best to help
us from ourselves.”
“Why in the hell are we having this discussion in public?” Tillis
told reporters. “This is not the way you deal with a strategic
vendor that has contracts.”
He added, “When a company is resisting a market opportunity for fear
of negative consequences, you should listen to them and then behind
closed doors figure out what they’re really trying to solve.”
Sen. Mark Warner of Virginia, the ranking Democrat on the Senate
Intelligence Committee, said he was “deeply disturbed” by reports
that the Pentagon is “working to bully a leading U.S. company.”
“Unfortunately, this is further indication that the Department of
Defense seeks to completely ignore AI governance,” Warner said in a
statement. It “further underscores the need for Congress to enact
strong, binding AI governance mechanisms for national security
contexts.”
While Pentagon officials say they always will follow the law with
their use of AI models, the department has taken steps to change the
culture among the military legal ranks.
Hegseth told Fox News last February, weeks after becoming defense
secretary, that “ultimately, we want lawyers who give sound
constitutional advice and don’t exist to attempt to be roadblocks to
anything.”

The same month, Hegseth also fired the top lawyers for the Army and
the Air Force without explanation. The Navy’s top lawyer had
resigned shortly after the election in late 2024.
___
O'Brien reported from Providence, Rhode Island. Associated Press
writer Ben Finley contributed to this report.
All contents © copyright 2026 Associated Press. All rights reserved.