ARTIFICIAL INTELLIGENCE
Building Trust in AI Systems
By: COLLEEN MCCRETTON | January 28, 2021
Bias in the data used by AI algorithms is drawing increasing attention. The internet is full of examples of bias in AI systems: recruiting algorithms trained on data that favored male candidates, facial recognition software unable to reliably identify people of color, medical systems trained on data that is not sufficiently diverse, and many others. However, there is another aspect of bias that impacts AI systems and bears scrutiny as well: the cognitive biases that users bring to the table.

The human brain processes a lot of information and often uses “shortcuts” to classify it, which can create cognitive biases. People tend to believe data that supports their current thinking (confirmation bias), to over-rely on the first piece of data they see (anchoring), or to focus on the data they notice first while ignoring the rest (attentional bias). Each of these user biases can undermine the effectiveness of an AI system. In a recent project, my FCAT team was tasked with processing data to uncover new insights for business stakeholders. We found that the stakeholders believed the system when it confirmed their thinking but didn’t trust it when it presented new or different information.

As luck would have it, I recently participated in a session for World Usability Day that focused on the appropriate level of trust users should place in AI systems. The consensus from the discussion was that transparency and explainability will be critical for future adoption – both for the user experience and for the ethical considerations of these systems. The discussion highlighted Explainable AI (XAI), a developing field built on the idea that algorithms cannot be “black boxes”. The idea is gaining industry traction. For example, the Defense Advanced Research Projects Agency (DARPA) has an XAI research program that aims to develop machine learning techniques that are both high-performing and understandable to human beings. At some point, XAI is not going to be optional. In the EU, for example, Article 22 of the General Data Protection Regulation (GDPR) gives individuals the right not to be subject to decisions based solely on automated processing.
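
To make the “not a black box” idea a bit more concrete, here is a minimal sketch of one common explainability technique: permutation importance, which asks how much a trained model’s accuracy drops when each input is scrambled. The dataset, model, and feature names below are invented purely for illustration; this is one possible approach, not a description of any particular FCAT system.

    # Minimal sketch: surfacing which inputs drive a model's predictions,
    # so its behavior is not a complete "black box" to the people using it.
    # The data and feature names here are synthetic, for illustration only.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    feature_names = ["income", "tenure", "num_accounts", "recent_activity"]  # hypothetical
    X, y = make_classification(n_samples=1000, n_features=4, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Permutation importance: how much does accuracy drop when each feature's
    # values are shuffled? Larger drops mean the model relies on it more.
    result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
    for name, mean, std in zip(feature_names, result.importances_mean, result.importances_std):
        print(f"{name:>16}: {mean:.3f} +/- {std:.3f}")

Even a simple ranking like this gives designers something concrete to show users about why the system behaves the way it does.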

For AI systems to be most useful, they need to engender the appropriate level of trust in users. Users need to be able to tell when the system should be trusted and when it should be questioned. This is even more challenging, and more important, when cognitive biases are in play. Another factor to consider is the “stakes” involved: an algorithm that recommends videos has more margin for error than one that approves loans, suggests medical diagnoses and courses of treatment, or is used in national defense.

To achieve higher levels of trust in AI systems, technologists, data scientists, and data engineers need to embrace the concept of XAI and work on their algorithms so that their output enables UX designers to:

  • communicate transparently about how the algorithms work
  • explain any biases in the data
  • build “scaffolding” to develop user trust in systems and account for cognitive biases
  • support users in making better decisions about when to trust and when to question the output of AI-enabled systems (a minimal sketch of this idea follows below)
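
As a hedged sketch of that last point, the snippet below pairs each prediction with a confidence signal and a plain “needs review” flag, so an interface can prompt the user to question a low-confidence result instead of presenting every answer with equal authority. The threshold, field names, and model are illustrative assumptions, not a standard API or an FCAT implementation.

    # Sketch: pairing each prediction with a confidence signal so the interface
    # can help users decide when to trust the output and when to question it.
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = make_classification(n_samples=1000, n_features=4, random_state=1)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    REVIEW_THRESHOLD = 0.75  # illustrative cutoff; below it, the UI suggests a second look

    def prediction_with_trust_signal(row):
        # Package the prediction with its probability so the UX layer can show
        # how confident the system is, rather than a bare yes/no answer.
        proba = model.predict_proba([row])[0]
        confidence = float(proba.max())
        return {
            "prediction": int(proba.argmax()),
            "confidence": round(confidence, 2),
            "needs_review": confidence < REVIEW_THRESHOLD,
        }

    print(prediction_with_trust_signal(X_test[0]))

This is only one way to operationalize “appropriate trust,” but it illustrates the kind of output UX designers need from the underlying model to build that scaffolding.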

Colleen McCretton is Director, User Experience Design, in FCAT.

 