Recognizing the bias of artificial intelligence

“We have entered the age of automation — overconfident yet underprepared,” says Joy Buolamwini, in a video describing how commercial facial recognition systems fail to recognize the gender of one in three women of color. The darker the skin, the worse the results.

It’s the kind of bias that is worrying now that artificial intelligence (AI) is used to determine things like who gets a loan, who is likely to get a job and who is shown what on the internet, she says.

Commercial facial recognition systems are sold as accurate and neutral. But few efforts are made to ensure they are ethical, inclusive or respectful of human rights and gender equity before they land in the hands of law enforcement agencies or corporations that may impact your life.

Joy Buolamwini is the founder of the Algorithmic Justice League, an initiative to foster discussion about biases of race and gender, and to develop new practices for technological accountability. Blending research, art and activism, Buolamwini calls attention to the harmful bias of commercial AI products — what she calls the “coded gaze”. To inform the public and advocate for change, she has testified before the Federal Trade Commission in the United States, served on the European Union’s Global Tech Panel, written op-eds for major news publications and appeared as a keynote speaker at numerous academic, industry and media events.

On websites and in videos, she shares her lived experience and spoken word poetry about a topic that is more commonly dealt with in dry, technical terms (or not at all).

[Video: “Ain’t I a Woman?”, a spoken word poem by Joy Buolamwini]

The “coded gaze” refers to how commercial AI systems can see people in ways that mirror and amplify injustice in society. At the MIT Media Lab’s Center for Civic Media, Buolamwini has researched commercial facial analysis systems, illustrating how gender and racial bias and inaccuracies occur. Flawed and incomplete training data, false assumptions and lack of technical audits are among the numerous problems that lead to heightened risks.
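One way researchers make this kind of bias visible is a disaggregated audit: rather than reporting a single overall accuracy figure, error rates are broken out by intersectional subgroup, such as darker-skinned women versus lighter-skinned men. The sketch below is a hypothetical illustration of that idea in Python, not code or data from Buolamwini’s research; the records and group labels are invented for demonstration.

```python
# Hypothetical sketch of a disaggregated audit of a gender classifier,
# loosely in the spirit of intersectional benchmarking. The records below
# are invented; a real audit would use a benchmark dataset balanced by
# gender and skin type.
from collections import defaultdict

# Each record: (skin_type_group, true_gender, predicted_gender)
records = [
    ("darker",  "female", "male"),      # misclassified
    ("darker",  "female", "female"),
    ("darker",  "male",   "male"),
    ("lighter", "female", "female"),
    ("lighter", "male",   "male"),
    ("lighter", "female", "female"),
]

totals = defaultdict(int)
errors = defaultdict(int)

for group, true_gender, predicted in records:
    key = (group, true_gender)          # audit each intersectional subgroup
    totals[key] += 1
    if predicted != true_gender:
        errors[key] += 1

# Per-subgroup error rates expose gaps that one overall accuracy number hides.
for key in sorted(totals):
    rate = errors[key] / totals[key]
    print(f"{key[0]:>7} {key[1]:>6}: error rate {rate:.0%}")
```

Even on this toy data, overall accuracy looks reasonable while the error rate for darker-skinned women is far higher than for any other subgroup, which is the pattern that routine technical audits are meant to catch before a system is deployed.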

To fight back, the Algorithmic Justice League and the Center on Privacy & Technology at Georgetown Law launched a Safe Face Pledge in December 2018. It’s a series of actionable steps companies can take to ensure facial analysis technology does not harm people. A handful of companies have signed the pledge and many leading AI researchers have indicated support.

It’s one of many initiatives Buolamwini and colleagues are experimenting with to elicit change from big tech companies. So far, she has found that drawing public attention to facial recognition biases has led to measurable reductions in inaccuracies. After Amazon attempted to discredit the findings of her research, leading AI experts fired back in April, calling on the company to stop selling its facial recognition technology to law enforcement agencies.

More can be done, she says. “Both accurate and inaccurate use of facial analysis technology to identify a specific individual (facial recognition) or assess an attribute about a person (gender classification or ethnic classification) can lead to violations of civil liberties,” writes Buolamwini on the MIT Media Lab blog on Medium.

She says safeguards to mitigate abuse are needed. “There is still time to shift towards building ethical AI systems that respect our human dignity and rights,” says Buolamwini. “We have agency in shaping the future of AI, but we must act now to bend it towards justice and inclusion.”

How do you feel about facial recognition systems?

  1. Anonymous

    Until it’s ready, it’s wise not to let it reach the market.
