JOHN DALTON: Your new book, Human-Centered AI, is the most balanced, pragmatic and optimistic analysis of artificial intelligence that I’ve read. You lay out a comprehensive guide to building reliable, safe, and trustworthy applications that feature both high levels of human control and high levels of automation. A critical part of your argument is that if we want to achieve a flourishing and humane future it’s essential for us to understand that computers are not in fact people, and vice versa. Why is clarifying the difference between humans and computers so important?
BEN SHNEIDERMAN: Some advocates of artificial intelligence promote the goal of human-like computers that match or exceed the full range of human abilities from thinking to consciousness. This vision attracts journalists who are eager to write about humanoid robots and contests between humans and computers. I consider these scenarios misleading and counterproductive, diverting resources and effort from meaningful projects that amplify, augment, empower, and enhance human performance.
I respect and value the remarkable capabilities that humans have for individual insight, team coordination, and community building. I seek to build technologies that support human self-efficacy, creativity, responsibility, and social connectedness.
JOHN DALTON: We’re awash in news about automation that fails, involving everything from biased school admissions and credit applications to autonomous vehicles that kill. Even Boeing ran into challenges recently with the 737 MAX. Civil aviation has some of the most robust safety measures and standards in place. What can even those of us outside of the airline industry learn from tragedies like that?
BEN SHNEIDERMAN: The two Boeing 737 MAX crashes are a complex story, but one important aspect was the designers’ belief that they could create a fully autonomous system so reliable that the pilots were not even informed of its presence or activation. There was no obvious visual display to inform the pilots of its status, nor was there a control panel that would guide them to turn off the autonomous system. The lesson is that excessive belief in machine autonomy can lead to deadly outcomes. When rapid performance is needed, high levels of automation are appropriate, but so are high levels of independent human oversight to track performance over the long term and investigate failures.
JOHN DALTON: Your vision for the future is one in which AI systems augment, amplify, and enhance our lives. Are there products and services out there today that you believe already do this?
BEN SHNEIDERMAN: Yes, the hugely successful digital cameras rely on high levels of AI to set the focus, shutter speed, and color balance, while giving users control over the composition, zoom, and the decisive moment when they take the photo. Similarly, navigation systems let users set the departure point and destination, transportation mode, and departure time; then the AI algorithms provide recommended routes for users to select from, as well as the capacity to change routes and destinations at will. Query completion, text auto-completion, spelling checkers, and grammar checkers all ensure human control while providing algorithmic support in graceful ways.
JOHN DALTON: As you point out in your book, there’s a lot of work to do before our design metaphors and governance structures support truly human-centered AI. What can we do to accelerate the adoption of HCAI?
BEN SHNEIDERMAN: Yes, it will take a long time to produce the changes that I envision, but our collective goal should be to reduce the time from 50 to 15 years. We can all begin by changing the terms and metaphors we use. Fresh sets of guidelines for writing about AI are emerging from several sources, but here is my draft offering:
- Clarify human initiative and control
- Give people credit for accomplishments
- Emphasize that computers are different from people
- Remember that people use technology to accomplish goals
- Recognize that human-like physical robots may be misleading
- Avoid using human verbs to describe computers
- Be aware that metaphors matter
- Clarify that people are responsible for use of technology
Another step will be revising the images of future technologies to replace humanoid robots with devices that are more like cars, elevators, thermostats, phones, and cameras.
John Dalton is VP Research in FCAT, where he investigates socioeconomic trends and engages in in-depth studies focused on emerging interfaces (augmented reality, virtual reality, speech, gesture, and biometrics).