Sarah Hoffman: Why do you think the Moral Machine went viral? Why do you think this topic got the attention of so many people?
Jean-François Bonnefon: I think that for many people, the Moral Machine was the first time they realized how challenging moral dilemmas can be. Even people who knew about dilemmas such as "is it OK to sacrifice one life to save five lives" and thought they had a response to that realized that they had a hard time making decisions on the complex scenarios offered by the Moral Machine. And the Moral Machine had millions of scenarios to offer, which ensured that no two people saw the same ones. In retrospect, this played a big role in the virality of the Moral Machine, because it made it YouTube-friendly: everybody could post a different reaction video to the Moral Machine, because they all got different scenarios to react to.
SH: Is there anything you wish you had done differently when designing the Moral Machine?
JF: If I were to do it all over again, I would think harder about including a homeless person as one of the potential crash victims. We did this because we believed that, sadly, people would sacrifice the homeless character more; and we wanted to use this as an example of why you cannot simply follow the preferences of the crowd, given that some of these preferences are unethical. But as the Moral Machine went viral, we lost control over the way it was presented in traditional and social media; and on some occasions the presence of the homeless character was perceived as a sign that it was an actual possibility that self-driving cars could be programmed to crash into the poor. I don't like that the Moral Machine may have indirectly contributed to this belief.
SH: What are your next steps for this research?
JF: The crash scenarios in the Moral Machine were intentionally simplified, so that people would not have to absorb too much information about road safety statistics. This approach was instrumental in the success of the website, but a shortcoming was that the scenarios lacked realism. So we have been developing another version of the platform, which uses realistic scenarios and actual traffic statistics. While this data-collection platform is not as simple or engaging as the Moral Machine, its results will be more directly relevant to policymaking.
SH: Do you think there’s anything we can learn from all this beyond self-driving cars that may apply to other technologies or use cases?
JF: Beyond self-driving cars, the data we collected address one huge question, which is whether people put different values on different lives when you cannot save everyone; and as a corollary, which kind of policy they are more likely to accept if governments need to be explicit about the lives they prioritize in a crisis. We have been through this situation at least twice during the pandemic: when ventilators became scarce, many countries had to explain which patients would be prioritized; and when the first vaccines became available, governments had to explain who would be first in line to get a shot. In cases like these, the data of the Moral Machine or comparable studies can help anticipate how citizens will react to high-stakes policies decided under close public scrutiny.
Sarah Hoffman leads AI and Machine Learning (ML) research for FCAT, helping the firm understand trends in these technologies and their potential impact on Fidelity.