The president blamed AI and embraced doing so. Is it becoming the new
'fake news'?
[September 04, 2025]
By LAURIE KELLMAN
Artificial intelligence, apparently, is the new “fake news.”
Blaming AI is an increasingly popular strategy for politicians, among others, seeking
to dodge responsibility for something embarrassing. AI
isn't a person, after all. It can't leak or file suit. It does make
mistakes, a credibility problem that makes it hard to tell fact
from fiction in the age of mis- and disinformation.
And when truth is hard to discern, the untruthful benefit, analysts say.
The phenomenon is widely known as “the liar's dividend.”
On Tuesday, President Donald Trump endorsed the practice. Asked about
viral footage showing someone tossing something out an upper-story White
House window, the president replied, “No, that's probably AI” — after
his press team had indicated to reporters that the video was real.
But Trump, known for insisting the truth is what he says it is, declared
himself all in on the AI-blaming phenomenon.
“If something happens that’s really bad,” he told reporters, “maybe I’ll
have to just blame AI.”
He's not alone.
AI is getting blamed — sometimes fairly, sometimes not
On the same day in Caracas, Venezuelan Communications Minister Freddy
Ñáñez questioned the veracity of a Trump administration video that
officials said showed a U.S. strike on a vessel in the Caribbean targeting
Venezuela’s Tren de Aragua gang and killing 11 people. The video of the
strike, posted to Truth Social, shows a long, multi-engine speedboat at
sea when a bright flash of light bursts over it. The boat is then briefly
seen covered in flames.

“Based on the video provided, it is very likely that it was created
using Artificial Intelligence,” Ñáñez said on his Telegram account,
describing “almost cartoonish animation.”
Blaming AI can at times be a compliment. (“He’s like an AI-generated
player,” tennis player Alexander Bublik said on ESPN of his U.S. Open
opponent Jannik Sinner's talent.) But when used by the powerful, experts
say, the practice can be dangerous.
Digital forensics expert Hany Farid has warned for years about the growing
capability of AI “deepfake” images, voices and video to aid fraud
or political disinformation campaigns, but there was always a deeper
problem.
“I’ve always contended that the larger issue is that when you enter this
world where anything can be fake, then nothing has to be real,” said
Farid, a professor at the University of California, Berkeley. “You get
to deny any reality because all you have to say is, ‘It’s a deepfake.’”
That wasn't so a decade or two ago, he noted. Trump issued a rare
apology ("if anyone was offended") in 2016 for his comments about
touching women without their consent on the notorious “Access Hollywood"
tape. His opponent, Democrat Hillary Clinton, said she was wrong to call
some of his supporters “a basket of deplorables.”
Toby Walsh, chief scientist and professor of AI at the University of New
South Wales in Sydney, said blaming AI leads to problems not just in the
digital world but the real world as well.
“It leads to a dark future where we no longer hold politicians (or
anyone else) accountable,” Walsh said in an email. “It used to be that
if you were caught on tape saying something, you had to own it. This is
no longer the case.”
Contemplating the ‘liar’s dividend’
Danielle K. Citron of the Boston University School of Law and Robert
Chesney of the University of Texas foresaw the issue in research
published in 2019. In it, they describe what they called “the liar's
dividend.”

President Donald Trump walks to sign executive orders during an artificial intelligence summit at the Andrew W. Mellon Auditorium, July 23, 2025, in Washington. (AP Photo/Julia Demaree Nikhinson, File)

“If the public loses faith in what they hear and see and truth
becomes a matter of opinion, then power flows to those whose
opinions are most prominent—empowering authorities along the way,”
they wrote in the California Law Review. “A skeptical public will be
primed to doubt the authenticity of real audio and video evidence.”
Polling suggests many Americans are wary about AI. About half of
U.S. adults said the increased use of AI in daily life made them
feel “more concerned than excited,” according to a Pew Research
Center poll from August 2024. Pew’s polling indicates that people
have become more concerned about the increased use of AI in recent
years.
Most U.S. adults appear to distrust AI-generated information when
they know that’s the source, according to a Quinnipiac poll from
April. About three-quarters said they could only trust the
information generated by AI “some of the time” or “hardly ever.” In
that poll, about 6 in 10 U.S. adults said they were “very concerned”
about political leaders using AI to distribute fake or misleading
information.
They have reason to be concerned, and Trump has played a sizable role
in muddying trust and truth.
Trump's history of misinformation, and even lies to suit his
narrative, predates AI. He's famous for his use of “fake news,” a
buzz term now widely used to signal skepticism about media reports.
Lesley Stahl of CBS' “60 Minutes” has said that Trump told her off
camera in 2016 that he tries to “discredit” journalists so that when
they report negative stories, they won't be believed.
Trump's claim on Tuesday that AI was behind the White House window
video wasn't his first attempt to blame AI. In 2023, he insisted
that the anti-Trump Lincoln Project used AI in a video to make him
“look bad.”
In the spot, titled “Feeble,” a female narrator taunts Trump. “Hey
Donald ... you’re weak. You seem unsteady. You need help getting
around.” She questions his “manhood,” accompanied by an image of two
blue pills. The video continues with footage of Trump stumbling over
words.

“The perverts and losers at the failed and once-disbanded Lincoln
Project, and others, are using A.I. (Artificial Intelligence) in
their Fake television commercials in order to make me look as bad
and pathetic as Crooked Joe Biden,” Trump posted on Truth Social.
The Lincoln Project told The Associated Press at the time that AI
was not used in the spot.
___
Associated Press writers Ali Swenson in New York, Matt O'Brien in
Providence, Rhode Island, Linley Sanders in Washington and Jorge
Rueda in Caracas, Venezuela, contributed to this report.
All contents © copyright 2025 Associated Press. All rights reserved.