How AI Can Foster Inclusion
By: Sarah Hoffman | December 9, 2021
We've spent a lot of time discussing the unintended bias that can easily creep into AI algorithms. But the same technology, properly designed and trained, can also be used to confront biases. A new generation of automated tools seeks to proactively promote inclusion in:

Writing. Text IQ’s Unconscious Bias Detector identifies partiality in performance appraisals, reporting on issues such as male managers giving higher scores to male workers or certain types of employees receiving more personality-focused feedback than work-focused comments.1 Content Moderator checks your writing for language that conflicts with a company’s preferred style or that could come across as biased, flagging terms like “blacklist” and messages like “Is that the best you can do?”2 Grammarly, which offers a service designed to improve written communication, launched a sensitivity feature that detects politically loaded terms, such as someone’s use of “Chinese Virus” instead of coronavirus or COVID-19.3 These checks can be useful for presentations, résumés, and social media posts, helping users avoid inadvertently offending people. San Francisco now uses a bias mitigation tool to remove race from police reports when deciding whether to charge suspects.4
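At their simplest, writing checkers like these start with lexicon matching: scan the text for watched terms and offer an alternative. The sketch below is purely illustrative; the term list and suggestions are hypothetical, not any vendor’s actual lexicon, and real products layer context-aware models on top of this.

```python
# Minimal lexicon-based flagger: find watched terms and suggest alternatives.
import re

# Hypothetical watchlist; real tools use much larger, curated lexicons.
FLAGGED_TERMS = {
    "blacklist": "denylist",
    "whitelist": "allowlist",
}

def flag_terms(text):
    """Return (term, suggestion, position) for each flagged term, in order."""
    findings = []
    for term, suggestion in FLAGGED_TERMS.items():
        for match in re.finditer(r"\b%s\b" % re.escape(term), text, re.IGNORECASE):
            findings.append((match.group(0), suggestion, match.start()))
    return sorted(findings, key=lambda f: f[2])

found = flag_terms("Add the domain to the blacklist, not the whitelist.")
# Each finding pairs the flagged word with a suggested replacement.
```

Word-boundary matching (`\b`) keeps the flagger from firing inside unrelated words, which is one reason even simple versions of these tools need careful tuning.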

Speech. Microsoft’s Group Transcribe app not only transcribes meetings in real time but also translates them, enabling non-native speakers to participate more fully in meetings.5 New versions of PowerPoint catch non-inclusive spoken language in real time, including gender and sexuality bias and ethnic slurs (see Figure 1). Siri no longer defaults to a female voice, which can perpetuate sexist stereotypes, and can speak in a variety of accents and languages.6 And while Big Tech companies like Microsoft, Google, Amazon, and Apple are still lagging behind in reducing racial bias in speech recognition technology (with accuracies of 73%, 69%, 69%, and 55% respectively), British speech recognition startup Speechmatics recently announced an accuracy rate of 83% for Black voices, after training its AI model with unlabeled data from social media and podcasts to help it learn different aspects of speech including accent, language, and intonation.7
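The per-group accuracy figures above come from auditing a recognizer against reference transcripts grouped by speaker demographics. A toy version of that audit might look like the sketch below; the scoring here is a simplified position-wise word match (real studies use word error rate over large corpora), and the sample data is invented.

```python
# Toy per-group transcription-accuracy audit.
from collections import defaultdict

def word_accuracy(reference, hypothesis):
    """Fraction of reference words matched at the same position (simplified)."""
    ref, hyp = reference.lower().split(), hypothesis.lower().split()
    matches = sum(1 for r, h in zip(ref, hyp) if r == h)
    return matches / len(ref)

def accuracy_by_group(samples):
    """samples: iterable of (group, reference, hypothesis) triples."""
    scores = defaultdict(list)
    for group, ref, hyp in samples:
        scores[group].append(word_accuracy(ref, hyp))
    return {g: sum(s) / len(s) for g, s in scores.items()}

samples = [
    ("group_a", "turn the lights on", "turn the lights on"),
    ("group_b", "turn the lights on", "turn the rights on"),
]
report = accuracy_by_group(samples)  # group_a: 1.0, group_b: 0.75
```

A large gap between groups, like the 73% vs. 55% figures cited above, is the signal that the training data under-represents some speakers.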

Imagery. In May, Microsoft launched people filters to help their advertisers find relevant, inclusive imagery in seconds, enabling filtering by characteristics like gender, ethnicity, and age.8 That same month, Google announced changes to its camera and imaging products, improving accuracy for dark skin tones based on a broader data set of images featuring black and brown faces.9 Snapchat’s “inclusive camera” effort captures a wide range of skin tones and also aims to remove biased assumptions when automatically adjusting people’s appearances, such as the assumption that smaller, thinner noses are better.10
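Under the hood, attribute-based imagery filters are a query over tagged catalog metadata. The sketch below shows the basic idea; the schema and records are hypothetical, and production systems add model-generated tags and relevance ranking.

```python
# Illustrative attribute filter over a tagged image catalog.
def filter_images(catalog, **criteria):
    """Return images whose metadata matches every given attribute."""
    return [img for img in catalog
            if all(img.get(key) == value for key, value in criteria.items())]

# Hypothetical catalog records with editorial tags.
catalog = [
    {"id": 1, "age_group": "senior", "setting": "office"},
    {"id": 2, "age_group": "young-adult", "setting": "office"},
    {"id": 3, "age_group": "senior", "setting": "outdoor"},
]
matches = filter_images(catalog, age_group="senior", setting="office")
```

The inclusion work is less in the filter itself than in making sure every group is actually represented in the catalog being filtered.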

Promising Advances Extend Efforts

While these tools advance inclusion for both individuals and corporations, we can take things even further with the help of additional, smarter AI:

Creating new tools for underserved communities. In 2019, Snapchat released a gender-changing AI filter. While many were concerned that the tool made light of a serious matter, some in the transgender community were happy to have a safe and easy way to explore themselves, including projecting what they would look like if they decided to pursue hormone therapy.11 Online gamers have been harassed when their voices don’t match their gender identity. Modulate’s “voice skins,” created to make gaming more fun, use machine learning to analyze a player’s speech and produce new speech with the same emotion, inflection, and cadence in the voice of a chosen character, providing a privacy shield for gamers. And more than 100 early testers asked if the technology could be used to ease the dysphoria caused by a mismatch between their voice and gender identities.12 Using the powerful language generator GPT-3, Create Lab Ventures created an Afro-Latina, bilingual AI, which debuted in school systems worldwide in September to inspire and uplift children of color.13

Identifying our own unconscious biases. Identifying AI bias may help us better appreciate our own biases. In 2018, Amazon shut down its AI recruiting tool when it was discovered to be biased against women.14 The model was trained on 10 years of résumés, most of them from men. The system ended up prioritizing résumés with verbs like “executed,” which were more common in men’s résumés, and penalizing résumés that contained the word “women’s,” as in “women’s chess club.” More recently, a video-analyzing AI recruiting tool was found to favor people with bookshelves behind them.15 Do human recruiters have similar biases? As companies develop and use fairness tools to uncover bias in their AI algorithms, these tools could potentially be a window into our own biases.
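A first-pass fairness audit can be as simple as comparing a screener’s pass rate for résumés that contain a given term against those that don’t, which is exactly how a penalty like the “women’s” one described above would surface. The function and data below are invented for illustration; real audits control for confounders and use proper statistical tests.

```python
# Toy audit: does containing a term correlate with the screener's decision?
def pass_rate_gap(records, term):
    """records: (resume_text, passed) pairs. Returns (rate_with, rate_without)."""
    with_term = [p for text, p in records if term in text.lower()]
    without = [p for text, p in records if term not in text.lower()]
    rate = lambda outcomes: sum(outcomes) / len(outcomes) if outcomes else None
    return rate(with_term), rate(without)

# Invented screening outcomes (1 = advanced, 0 = rejected).
records = [
    ("captain of women's chess club", 0),
    ("executed product roadmap", 1),
    ("women's debate team president", 0),
    ("led engineering team", 1),
]
gap = pass_rate_gap(records, "women's")  # (0.0, 1.0): a red flag worth investigating
```

On real data, a gap this stark would prompt a deeper look at what the model learned from its training set.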

Tracking the changing landscape in real time. Acceptable terminology can change quickly. In June 2020, GitHub replaced the term “master” with “main” to avoid a slavery reference.16 That same year, employees from numerous companies, including more than 50 employees from Microsoft, initiated efforts to remove certain terminology from their source code, switching terms like whitelist and blacklist to allowlist and denylist and master-slave to primary-secondary.17 Today, AI is being used to monitor websites for things like product price drops, breaking news stories, and new job postings.18 This technology could be extended to point out wording changes that are happening online in real-time, as suggestions for a company to consider in their own communications.
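Extending a website monitor to terminology shifts could be a matter of diffing successive snapshots of a page against a watchlist of terms, flagging words that appear or disappear, such as the master-to-main change noted above. The watchlist and snapshots below are illustrative; a real monitor would also fetch pages, normalize punctuation, and track trends over time.

```python
# Sketch: report watched terms that appeared or disappeared between snapshots.
WATCHLIST = {"whitelist", "blacklist", "allowlist", "denylist", "master", "main"}

def terminology_diff(old_text, new_text):
    """Compare two page snapshots against the watchlist."""
    old = {w for w in old_text.lower().split() if w in WATCHLIST}
    new = {w for w in new_text.lower().split() if w in WATCHLIST}
    return {"removed": sorted(old - new), "added": sorted(new - old)}

diff = terminology_diff(
    "clone the master branch and edit the whitelist",
    "clone the main branch and edit the allowlist",
)
```

Surfacing such diffs as suggestions, rather than automatic rewrites, keeps the final wording decision with the company’s communications team.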

Self-driving cars might look very different if they were designed with common disabilities in mind. Research has shown that drivers’ reaction times increase as they get older.19 Yet reacting to unexpected situations is precisely when driverless cars expect a human to intervene.20 Making things worse, almost no experiments have been done with aging drivers and driverless cars. Snapchat’s gender-changing AI mentioned above might be very different if it were specifically designed as a tool to help transgender people.

AI is a technology that’s coming of age. Although it gets a lot of bad press for both overlooking and reinforcing prejudices, perhaps when properly designed and trained with inclusion considered from the start, AI could more effectively counter our biases and support underserved communities. How might other tools and industries be reimagined if inclusion were considered upfront?

References

9. “Google is trying to make its image processing more inclusive.”
19. Salvia, E., Petit, C., Champely, S., Chomette, R., Di Rienzo, F., & Collet, C. (2016). Effects of age and task load on drivers’ response accuracy and reaction time when responding to traffic lights. Frontiers in Aging Neuroscience, 8, 169.