Tech industry tried reducing AI's pervasive bias. Now Trump wants to end
its 'woke AI' efforts
April 28, 2025
By MATT O'BRIEN
CAMBRIDGE, Mass. (AP) — After retreating from their workplace diversity,
equity and inclusion programs, tech companies could now face a second
reckoning over their DEI work in AI products.
In the White House and the Republican-led Congress, “woke AI” has
replaced harmful algorithmic discrimination as a problem that needs
fixing. Past efforts to “advance equity” in AI development and curb the
production of “harmful and biased outputs” are a target of
investigation, according to subpoenas sent to Amazon, Google, Meta,
Microsoft, OpenAI and 10 other tech companies last month by the House
Judiciary Committee.
And the standard-setting branch of the U.S. Commerce Department has
deleted mentions of AI fairness, safety and “responsible AI” in its
appeal for collaboration with outside researchers. It is instead
instructing scientists to focus on “reducing ideological bias” in a way
that will “enable human flourishing and economic competitiveness,”
according to a copy of the document obtained by The Associated Press.
In some ways, tech workers are used to a whiplash of Washington-driven
priorities affecting their work.
But the latest shift has raised concerns among experts in the field,
including Harvard University sociologist Ellis Monk, who several years
ago was approached by Google to help make its AI products more
inclusive.
Back then, the tech industry already knew it had a problem with the
branch of AI that trains machines to “see” and understand images.
Computer vision held great commercial promise but echoed the historical
biases found in earlier camera technologies that portrayed Black and
brown people in an unflattering light.

“Black people or darker skinned people would come in the picture and
we’d look ridiculous sometimes,” said Monk, a scholar of colorism, a
form of discrimination based on people’s skin tones and other features.
Google adopted a color scale invented by Monk that improved how its AI
image tools portray the diversity of human skin tones, replacing a
decades-old standard originally designed for doctors treating white
dermatology patients.
“Consumers definitely had a huge positive response to the changes,” he
said.
Now Monk wonders whether such efforts will continue. While
he doesn't believe that his Monk Skin Tone Scale is threatened because
it's already baked into dozens of products at Google and elsewhere —
including camera phones, video games, AI image generators — he and other
researchers worry that the new mood is chilling future initiatives and
funding to make technology work better for everyone.
“Google wants their products to work for everybody, in India, China,
Africa, et cetera. That part is kind of DEI-immune,” Monk said. “But
could future funding for those kinds of projects be lowered? Absolutely,
when the political mood shifts and when there’s a lot of pressure to get
to market very quickly.”
Trump has cut hundreds of science, technology and health funding grants
touching on DEI themes, but the administration's influence on the
commercial development of chatbots and other AI products is more
indirect. In investigating AI
companies, Republican Rep. Jim Jordan, chair of the judiciary committee,
said he wants to find out whether former President Joe Biden's
administration “coerced or colluded with” them to censor lawful speech.
Michael Kratsios, director of the White House's Office of Science and
Technology Policy, said at a Texas event this month that Biden's AI
policies were “promoting social divisions and redistribution in the name
of equity.”

The Trump administration declined to make Kratsios available for an
interview but cited several examples of what he meant. One was a line
from a Biden-era AI research strategy that said: “Without proper
controls, AI systems can amplify, perpetuate, or exacerbate inequitable
or undesirable outcomes for individuals and communities.”
Even before Biden took office, a growing body of research and personal
anecdotes was attracting attention to the harms of AI bias.
One study showed self-driving car technology has a hard time detecting
darker-skinned pedestrians, putting them in greater danger of getting
run over. Another study asking popular AI text-to-image generators to
make a picture of a surgeon found they produced a white man about 98% of
the time, far higher than the real proportion even in a heavily
male-dominated field.

Ellis Monk, professor of Sociology at Harvard University and
developer of the Monk Skin Tone Scale, poses at his office,
Wednesday, Feb. 26, 2025, in Cambridge, Mass. (AP Photo/Charles
Krupa)
Face-matching software for unlocking phones misidentified Asian faces.
Police in U.S. cities wrongfully arrested Black men based on false face
recognition matches. And a decade ago, Google’s own photos app sorted a
picture of two Black people into a category labeled as “gorillas.”
Even government scientists in the first Trump administration
concluded in 2019 that facial recognition technology was performing
unevenly based on race, gender or age.
Biden's election propelled some tech companies to accelerate their
focus on AI fairness. The 2022 arrival of OpenAI's ChatGPT added new
priorities, sparking a commercial boom in new AI applications for
composing documents and generating images, pressuring companies like
Google to ease their caution and catch up.
Then came Google's Gemini AI chatbot — and a flawed product rollout
last year that would make it the symbol of “woke AI” that
conservatives hoped to unravel. Left to their own devices, AI tools
that generate images from a written prompt are prone to perpetuating
the stereotypes accumulated from all the visual data they were
trained on.
Google's was no different, and when asked to depict people in
various professions, it was more likely to favor lighter-skinned
faces and men, and, when women were chosen, younger women, according
to the company's own public research.
Google tried to place technical guardrails to reduce those
disparities before rolling out Gemini's AI image generator just over
a year ago. It ended up overcompensating for the bias, placing
people of color and women in inaccurate historical settings, such as
answering a request for American founding fathers with images of men
in 18th century attire who appeared to be Black, Asian and Native
American. Google quickly apologized and temporarily pulled the plug
on the feature, but the outrage became a rallying cry taken up by
the political right.
With Google CEO Sundar Pichai sitting nearby, Vice President JD
Vance used an AI summit in Paris in February to decry the
advancement of “downright ahistorical social agendas through AI,”
naming the moment when Google’s AI image generator was “trying to
tell us that George Washington was Black, or that America’s
doughboys in World War I were, in fact, women.”

“We have to remember the lessons from that ridiculous moment,” Vance
declared at the gathering. “And what we take from it is that the
Trump administration will ensure that AI systems developed in
America are free from ideological bias and never restrict our
citizens’ right to free speech.”
A former Biden science adviser who attended that speech, Alondra
Nelson, said the Trump administration's new focus on AI's
“ideological bias” is in some ways a recognition of years of work to
address algorithmic bias that can affect housing, mortgages, health
care and other aspects of people's lives.
“Fundamentally, to say that AI systems are ideologically biased is
to say that you identify, recognize and are concerned about the
problem of algorithmic bias, which is the problem that many of us
have been worried about for a long time,” said Nelson, the former
acting director of the White House's Office of Science and
Technology Policy who co-authored a set of principles to protect
civil rights and civil liberties in AI applications.
But Nelson doesn't see much room for collaboration amid the
denigration of equitable AI initiatives.
“I think in this political space, unfortunately, that is quite
unlikely,” she said. “Problems that have been differently named —
algorithmic discrimination or algorithmic bias on the one hand, and
ideological bias on the other — will regrettably be seen as two
different problems.”
All contents © copyright 2025 Associated Press. All rights reserved