US military reaches deals with 7 tech companies to use their AI on
classified systems
[May 02, 2026]
By BEN FINLEY and MATT O'BRIEN
WASHINGTON (AP) — The Pentagon said Friday that it has reached deals
with seven tech companies to use their artificial intelligence in its
classified computer networks, allowing the military to tap into
AI-powered capabilities to help it fight wars.
Google, Microsoft, Amazon Web Services, Nvidia, OpenAI, Reflection and
SpaceX will provide their resources to help “augment warfighter
decision-making in complex operational environments,” the Defense
Department said.
Notably absent from the list is AI company Anthropic, after its public
dispute and legal fight with the Trump administration over the ethics
and safety of AI usage in war.
The Defense Department has been rapidly accelerating its use of AI in
recent years. The technology can help the military reduce the time it
takes to identify and strike targets on the battlefield, while aiding in
the organization of weapons maintenance and supply lines, according to a
report in March from the Brennan Center for Justice.
But AI has already raised concerns that its use could invade Americans'
privacy or allow machines to choose targets on the battlefield. One of
the companies contracting with the Pentagon said its agreement required
human oversight in certain situations.
Concerns about military use of AI arose during Israel’s war against
militants in Gaza and Lebanon, with U.S. tech giants quietly empowering
Israel to track targets. But the number of civilians killed also soared,
fueling fears that these tools contributed to the deaths of innocent
people.

Questions about military use of AI still being worked out
The Pentagon's latest contracts come at a time of anxiety about the
potential for over-reliance on the technology on the battlefield, said
Helen Toner, interim executive director at Georgetown University’s
Center for Security and Emerging Technology.
“A lot of modern warfare is based on people sitting in command centers
behind monitors, making complicated decisions about confusing,
fast-moving situations,” said Toner, a former board member of OpenAI.
“AI systems can be helpful in terms of summarizing information or
looking at surveillance feeds and trying to identify potential targets.”
But questions about the appropriate levels of human involvement, risk
and training are still being worked out, she said.
“How do you roll out these tools rapidly for them to be effective and
provide strategic advantage,” Toner asked, “while also recognizing that
you need to train the operators and make sure they know how to use them
and don’t over-trust them?”
Anthropic raised such concerns. The tech company said it wanted
assurances in its contract that the military would not use its
technology in fully autonomous weapons or for the surveillance of
Americans. Defense Secretary Pete Hegseth said the company must allow
for any uses the Pentagon deemed lawful.
Anthropic sued after President Donald Trump, a Republican, tried to stop
all federal agencies from using the company’s chatbot Claude and Hegseth
sought to label the company a supply chain risk, a designation meant to
protect against sabotage of national security systems by foreign
adversaries.
OpenAI had announced a deal with the Pentagon in March to effectively
replace Anthropic with ChatGPT in classified environments. OpenAI
confirmed in a statement Friday that it was the same agreement it
announced in early March.
“As we said when we first announced our agreement several months ago, we
believe the people defending the United States should have the best
tools in the world,” the company said.

The Pentagon is seen from Air Force One as it flies over Washington
on March 2, 2022. (AP Photo/Patrick Semansky, File)

One company's agreement with the Pentagon included language that
said there should be human oversight over any missions in which the
AI systems act autonomously or semiautonomously, according to a
person familiar with the agreement who was not authorized to speak
about it publicly. The language also said the AI tools must be used
in ways that are consistent with constitutional rights and civil
liberties.
Those resemble sticking points for Anthropic, though OpenAI has
previously said that it secured similar assurances when it made its
own deal with the Pentagon.
The Pentagon's point of view
Emil Michael, the Pentagon's chief technology officer, told CNBC on
Friday that it would have been irresponsible to rely on only one
company, an acknowledgment of the friction with Anthropic.
“And when we learned that one partner didn’t really want to work
with us in the way we wanted to work with them, we went out and made
sure that we had multiple different providers,” Michael said.
Some of the companies, including Amazon and Microsoft, have long
worked with the military in classified environments, and it was not
immediately clear if the new agreements significantly altered their
government partnerships. Others, such as chipmaker Nvidia and the
startup Reflection, are new to such work. Both companies make
open-source AI models, systems in which some key components are
publicly accessible for others to build upon. Michael has described
such models as a priority, providing an “American alternative” to
China's rapid development of open AI systems.
The Pentagon said Friday that military personnel are already using
its AI capabilities through its official platform, GenAI.mil.
“Warfighters, civilians and contractors are putting these
capabilities to practical use right now, cutting many tasks from
months to days,” the Pentagon said, adding that the military's
growing AI capabilities will “give warfighters the tools they need
to act with confidence and safeguard the nation against any threat.”
In many cases, the military uses artificial intelligence the same
way civilians do: to take on rote tasks that would take humans hours
or days to complete, said Toner, of Georgetown University.
AI can be used to better predict when a helicopter needs maintenance
or figure out how to efficiently move large amounts of troops and
gear, she said. It can also help determine whether vehicles on a
drone's surveillance feeds are civilian or military.

But people shouldn't become overly dependent on it.
“There's a phenomenon called automation bias, where people can be
prone to assume that machines work better than they actually do,”
Toner said.
___
O'Brien reported from Providence, Rhode Island.
All contents © copyright 2026 Associated Press. All rights reserved.