Artificial Intelligence

A Conversation with Juliette Powell: Author of The AI Dilemma

By: Sarah Hoffman | December 6, 2023

In her recent book, coauthored with Art Kleiner, Juliette Powell, an independent researcher, entrepreneur, and keynote speaker at the intersection of technology and business, discusses seven principles for responsible technology.

Sarah Hoffman, VP of AI and Machine Learning Research in FCAT, spoke with Juliette about how organizations can embrace responsible technology.


When

Thursday, November 14, 2023

10:00 A.M. – 11:00 A.M. ET

Where

Zoom

  1. In The AI Dilemma, when discussing the principle “Embrace Creative Friction”, you mention Professor Dorothy Leonard, who says, “If you want an innovative organization, you need to hire, work with, and promote people who make you uncomfortable.” How do you incorporate that principle into the work you do?

    I think I make a lot of people uncomfortable. I think I get brought in specifically because I’m an outsider. I’m a woman of color. I’m 6 feet tall. I do not toe the line, and I tell people exactly what I think. That could be incredibly scary and intimidating, especially in a corporate environment. So I am the perfect example of what I’m saying is necessary. Because there’s a reason why people bring me in. Within the organization there isn’t enough diversity, so they have to find it elsewhere. No matter how you end up doing it – whether you hire for it, or bring in people from the outside, or you nurture an internal and external ecosystem which is probably the healthiest of all – try it. See what happens if you haven’t done it before. If you have done it before, then hopefully you have the metrics to reinforce it as a positive practice.

  2. The principle “Confront and Question Bias” discusses the dual responsibility to include diverse voices in the conversation without burdening people from those groups to be diversity champions on top of their work responsibilities. What are some organizations that have done this particularly well? What can other organizations learn from them?

    One of my go-to examples is -- and this is a huge disclaimer: it’s being run by somebody that I care for very, very much, Astro Teller -- X, which is a division of Alphabet and is, in my opinion, incredibly well run. If you're not familiar with X, they're the moonshot factory. [They develop] so many projects that are big picture and are good for the world, not just good for the company. Because their timeline is much longer -- it's not about the quarter or the year -- success means something else than it would for their products and services divisions. I love their approach, and I think that they really have their heart and their mind in the right place. And again, this idea of bringing in just very, very divergent people and cross disciplinarians, too. People that know how to do more than one thing mesh well with others that know how to do more than one thing, and one of the key elements there, which I think we can all take a lesson from, is leave your ego at the door. You need to be able to play well with others. You're not the only expert in the room, and you have to be able to leave room for other expertise.

    Another company that does it really well that I just discovered -- they're not even considered a technology company -- the Lincoln Center here in New York has been incorporating more and more AI, not just into their processes, but also in their shows. I was just there a couple of nights ago for the opening of the Malcolm X opera, and they had fellows from across New York, really a very, very different group of young people who were essentially working at nonprofits of various kinds, and they were being brought in to the Lincoln Center in this fellowship program for two years, where they get to mix and match and help each other's nonprofits, but at the same time they will be placed on boards of Lincoln Center resident organizations. They’re not just reviving the arts in New York, but also diversifying the arts in New York, and really starting at the top. I think it's one of the best examples I've seen in the city of really actively injecting all types of diversity in action simultaneously.

  3. In the conclusion of your book, you discuss ChatGPT and generative AI: “It’s all happening so quickly that by the time this book is published in summer 2023, we may need to write a sequel.” If you were working on a sequel now, what’s the main thing you would want organizations to know that may differ from what’s in the book today?

    If I were to write the book right now, I would write the entire book about the calculus of intentional risk. This idea that different organizations of different sizes have a very different reputational risk. If you're deploying models that are coming from third parties, how much of your intellectual property is being leaked out? What happens if regulation comes down the pipeline and you get sued? What does your insurance say about any of this stuff? Is big tech really indemnifying you or not? How much does it actually cost to scale generative AI within your organization for a particular use case? If you are using AI, do you really have to be using AI? All of these things are part of a larger calculus of intentional risk. When companies look at their AI governance and they’re doing their risk benefit assessment, how long term is it? Are they also looking at the potential of refining the model? What happens if it keeps hallucinating and doesn't end up getting better and it ends up costing them far more than they anticipated? Are they calculating that? And on the other side, are they calculating what it's going to cost them if they don't take these other risks in the short, medium, and long term?

  4. What is one thing you want people to take away from your book?

    It’s not just data; it’s people. The data just describes actual human lives. For financial services, you might think of the data as money or information that surrounds money. But, beyond that, it's people's savings. It's people's lives. It's people's hopes and people's dreams. That's why your responsibility is much broader than whatever the regulators put on you. Because this is new technology. Nobody is an expert. So don't just play with it because you've got a new shiny toy to play with. Play with it in a responsible way that will help benefit the majority of people that support your organization.

  5. One thing I loved about this book was how thoroughly you documented your sources. If you were to recommend one of your sources as a next read for someone who’s read The AI Dilemma and wants to learn more, which one would it be?

    I would recommend Joy Buolamwini’s book, Unmasking AI. I wrote about Joy in my book. She taught me a lot, and I've never even met her. She did her PhD at MIT, studying robotics. She literally had to wear a white mask on her face for a robot she had created to recognize her. Her book came out on October 31st. I haven't read it yet, but her story is amazing.



This website is operated by Fidelity Center for Applied Technology (FCAT)® which is part of Fidelity Labs, LLC (“Fidelity Labs”), a Fidelity Investments company. FCAT experiments with and provides innovative products, services, content and tools, as a service to its affiliates and as a subsidiary of FMR LLC. Based on user reaction and input, FCAT is better able to engage in technology research and planning for the Fidelity family of companies. FCATalyst.com is independent of fidelity.com. Unless otherwise indicated, the information and items published on this web site are provided by FCAT and are not intended to provide tax, legal, insurance or investment advice and should not be construed as an offer to sell, a solicitation of an offer to buy, or a recommendation for any security by any Fidelity entity or any third-party. In circumstances where FCAT is making available either a product or service of an affiliate through this site, the affiliated company will be identified. Third party trademarks appearing herein are the property of their respective owners. All other trademarks are the property of FMR LLC.


This is for persons in the U.S. only.


245 Summer St, Boston MA

© 2008-2024 FMR LLC. All rights reserved | FCATalyst.com

