After federal study finds racial bias in facial recognition tech, advocates renew calls for ban

The study found higher rates of false-positive matches for people of color.

December 20, 2019, 2:31 PM

Advocacy groups and lawmakers are renewing calls for a ban on government use of facial recognition technology in the wake of a sweeping new study that found a majority of the software exhibits racial bias.

"This study makes it clear: the government needs to stop using facial recognition surveillance right now," Evan Greer, the deputy director of Fight for the Future, a nonprofit digital rights advocacy group, told ABC News in a statement.

"This technology has serious flaws that pose an immediate threat to civil liberties, public safety, and basic human rights," she added.

The study, published Thursday by the U.S. Department of Commerce's National Institute of Standards and Technology (NIST), examined 189 software algorithms from 99 developers, which together represent a majority of the industry.

"Even if the algorithms improve in the future, biometric surveillance like face recognition is dangerous and invasive," Greer said. "Lawmakers everywhere should take action to ban the use of this nuclear-grade surveillance tech."  

PHOTO: Commuters pass through the World Trade Center in New York in this Dec. 4, 2019, file photo. (Mark Lennihan/AP)

"While it is usually incorrect to make statements across algorithms, we found empirical evidence for the existence of demographic differentials in the majority of the face recognition algorithms we studied," Patrick Grother, a NIST computer scientist and the report’s primary author said in a statement.

The researchers found higher rates of false positives for Asian and African American faces than for Caucasian faces in "one-to-one" matching, which is used to confirm that a photo matches a different photo of the same person in a database, according to an agency statement.


Depending on the algorithm, false-positive rates for one-to-one matching were 10 to 100 times higher for Asian and African American faces than for Caucasian faces.

For "one-to-many" matches, which determine whether a photo of a person has any match in a database -- and is often what is used for identifying persons of interest -- the researchers found higher rates of false positives for African American women.

False positives in one-to-many matches can carry particularly heavy consequences, including false accusations, according to the researchers. False positives in one-to-one matches are still a concern, as they can create security vulnerabilities and allow impostors to gain access to data.

"In a one-to-one search, a false negative might be merely an inconvenience -- you can’t get into your phone, but the issue can usually be remediated by a second attempt," Grother explained. "But a false positive in a one-to-many search puts an incorrect match on a list of candidates that warrant further scrutiny."

The team emphasized that the prevalence of these errors varied widely from algorithm to algorithm.

Rep. Bennie G. Thompson, D-Miss., the chairman of the Committee on Homeland Security, said the report shows the technology is "more unreliable and racially biased than we feared" as the government ramps up its use.

"In recent years, the Department of Homeland Security has significantly increased its use of facial recognition technology on Americans and visitors alike, despite serious privacy and civil liberties concerns," Thompson said in a statement Thursday.

"This report not only confirms these concerns, but shows facial recognition systems are even more unreliable and racially biased than we feared," he added.

Thompson called on the Trump administration to "reassess its plans for facial recognition technology in light of these shocking results."

In a series of tweets reacting to the new study, the American Civil Liberties Union, which has fought the use of this technology in court, said government agencies "must immediately halt the use of this dystopian technology."

"One false match can lead to missed flights, long interrogations, tense police encounters, false arrests, or worse," the group tweeted. "Even government scientists are now confirming this surveillance technology is flawed and biased."

Privacy advocates have been raising alarms about facial recognition's pitfalls for months.

In July, a U.K.-based report found that 80% of the people flagged as suspects by London’s Metropolitan Police facial recognition system were innocent.

Some governments have responded: in May, San Francisco became the first U.S. city to bar all police and city agencies from using facial recognition technology. The towns of Somerville and Brookline in Massachusetts have also banned municipal use of the technology.

Makers of the technology argue it has a slew of positive applications, despite the controversy.

Amazon says on the website for Rekognition, its facial recognition service, that the tech can be used to help "humanitarian groups to identify and rescue human trafficking victims," and the company has published a list of common misconceptions about it online.
