U.S. government study finds racial bias in facial recognition tools
December 20, 2019
By Jan Wolfe and Jeffrey Dastin
(Reuters) - Many facial recognition systems
misidentify people of color more often than white people, according to a
U.S. government study released on Thursday that is likely to increase
skepticism of technology widely used by law enforcement agencies.
The study by the National Institute of Standards and Technology (NIST)
found that, when conducting a particular type of database search known
as "one-to-one" matching, many facial recognition algorithms falsely
identified African-American and Asian faces 10 to 100 times more than
Caucasian faces.
The study also found that African-American females are more likely to be
misidentified in "one-to-many" matching, which can be used for
identification of a person of interest in a criminal investigation.
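For readers unfamiliar with the two search modes NIST evaluated, the sketch below illustrates the distinction under common, simplified assumptions: faces are reduced to embedding vectors, a "one-to-one" check compares a probe image against a single claimed identity, and a "one-to-many" search ranks the probe against an entire gallery. The embed() function and the 0.6 threshold are hypothetical placeholders for illustration only, not details of any vendor's system or of the NIST tests.

```python
import numpy as np

def embed(image):
    """Hypothetical face-embedding model: maps an image to a unit vector.
    Stands in for whatever proprietary model a vendor actually uses."""
    raise NotImplementedError

def cosine_similarity(a, b):
    """Similarity score between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def one_to_one_match(probe_image, claimed_image, threshold=0.6):
    """Verification ('one-to-one'): does the probe match one claimed identity?
    A false positive here means accepting the wrong person, e.g. at a passport gate."""
    score = cosine_similarity(embed(probe_image), embed(claimed_image))
    return score >= threshold

def one_to_many_search(probe_image, gallery):
    """Identification ('one-to-many'): rank everyone in a gallery against the probe.
    This is the mode used to look for a person of interest in a database."""
    probe_vec = embed(probe_image)
    scores = {name: cosine_similarity(probe_vec, embed(img))
              for name, img in gallery.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```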
While some companies have played down earlier findings of bias in
technology that can guess an individual's gender, known as "facial
analysis," the NIST study was evidence that face matching struggled
across demographics, too.
Joy Buolamwini, founder of the Algorithmic Justice League, called the
report "a comprehensive rebuttal" of those saying artificial
intelligence (AI) bias was no longer an issue. The study comes at a time
of growing discontent over the technology in the United States, with
critics warning it can lead to unjust harassment or arrests.
For the report, NIST tested 189 algorithms from 99 developers, excluding
companies such as Amazon.com Inc (AMZN.O) that did not submit one for
review. What it tested differs from what companies sell, in that NIST
studied algorithms detached from the cloud and proprietary training
data.
China's SenseTime, an AI startup valued at more than $7.5 billion, had
"high false match rates for all comparisons" in one of the NIST tests,
the report said.
People walk past a poster simulating facial recognition software at
the Security China 2018 exhibition on public safety and security in
Beijing, China, October 24, 2018. REUTERS/Thomas Peter
SenseTime's algorithm produced a false positive more than 10% of the time when
looking at photos of Somali men. If the algorithm were deployed at an airport, that
would mean a Somali man could pass a customs check roughly one in every 10 times he
used the passport of another Somali man.
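As a back-of-the-envelope check of that arithmetic, not a statement about SenseTime's actual system, the snippet below assumes an illustrative 10% false match rate and independent attempts; under those assumptions, about one impostor pass is expected per 10 attempts.

```python
# Rough arithmetic behind the "one in every 10 times" figure: assumes an
# illustrative 10% false match rate and independent attempts (both assumptions
# for illustration, not measurements from the NIST report).
false_match_rate = 0.10

def expected_impostor_passes(attempts, fmr=false_match_rate):
    """Expected number of times an impostor clears a one-to-one check."""
    return attempts * fmr

def prob_at_least_one_pass(attempts, fmr=false_match_rate):
    """Chance of clearing the check at least once in a given number of attempts."""
    return 1 - (1 - fmr) ** attempts

print(expected_impostor_passes(10))           # ~1 pass expected in 10 attempts
print(round(prob_at_least_one_pass(10), 3))   # ~0.651
```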
SenseTime did not immediately return a request for comment.
Yitu, another AI startup from China, was more accurate and showed little racial
skew.
Microsoft Corp (MSFT.O) had almost 10 times more false positives for women of
color than men of color in some instances during a one-to-many test. Its
algorithm showed little discrepancy in a one-to-many test with photos just of
black and white males.
Microsoft said it was reviewing the report and did not have a comment on
Thursday evening.
Congressman Bennie Thompson, chairman of the U.S. House Committee on Homeland
Security, said the findings of bias were worse than feared, at a time when
customs officials are adding facial recognition to travel checkpoints.
"The administration must reassess its plans for facial recognition technology in
light of these shocking results," he said.
(Reporting by Jan Wolfe and Jeffrey Dastin; Additional reporting by Yingzhi Yang
in Beijing; Editing by Andy Sullivan and Leslie Adler)
© 2019 Thomson Reuters. All rights reserved. This material may not be published,
broadcast, rewritten or redistributed.
Thomson Reuters is solely responsible for this content.