Moral AI bias in self-driving cars
In this section, we will explain AI bias, morals, and ethics. Explaining AI goes well beyond understanding, from a mathematical point of view, how an AI algorithm reaches a given decision. It also includes defining the limits of AI algorithms in terms of bias, morals, and ethics. We will use AI in SDCs to illustrate these terms and the concepts they convey.
The goal of this section is to explain AI, not to advocate the use of SDCs, which remains a personal choice, or to judge a human driver's decisions made in life and death situations.
Explaining does not mean judging. XAI provides us with the information we need to make our decisions and form our own opinions.
This section will not provide moral guidelines. Moral guidelines depend on cultures and individuals. However, we will explore situations that require moral judgments and decisions, which will take us to the very limits of AI and XAI.
We will provide the information each of us needs to understand the complexity of the decisions that autopilots face in critical situations.
We will start by diving directly into a complex situation for a vehicle on autopilot.
Life and death autopilot decision making
In this section, we will lay the groundwork for the explainable decision tree that we will implement in the subsequent sections. We will face life and death situations in which we must analyze who might die in an accident that cannot be avoided.
This section uses MIT's Moral Machine experiment, which addresses the issue of how an AI machine will make moral decisions in life and death situations.
To understand the challenge facing AI, let's first go back to the trolley problem.
The trolley problem
The trolley problem takes us to the core of human decisions. Should we decide on a purely utilitarian basis, maximizing utility above anything else? Should we take deontological ethics—that is, actions based on moral rules—into account? The trolley problem, which was first expressed more than 100 years ago, creates a dilemma that remains difficult to solve since it leads to subjective cultural and personal considerations.
The trolley problem involves four protagonists:
- A runaway trolley going down a track: Its brakes have failed, and it is out of control.
- A group of five people at a short distance in front of the trolley: They will be killed if the trolley continues on its track.
- One person on a sidetrack.
- You, standing next to a lever: If you don't pull the lever, five people will die. If you pull the lever, one person will die. You only have a few seconds to make your decision.
In the following diagram, you can see the trolley on the left; you in the middle, next to the lever; the five people that will be killed if the trolley stays on the track; and the person on the sidetrack:
Figure 2.1: The trolley problem
Image credit: Original: McGeddon, Vector: Zapyon / CC BY-SA 4.0 (https://creativecommons.org/licenses/by-sa/4.0)
This thought experiment consists of imagining numerous situations, such as:
- Five older people on the track and one child on the sidetrack
- Five women on the track and one man on the sidetrack
- A young family on the track and an elderly person on the sidetrack
- Many more combinations
You must decide whether to pull the lever or not. You must determine who will die. Worse, you must decide how your ML autopilot algorithm will make decisions in an SDC when facing life and death situations.
Let's explore the basis of moral-based decision making in SDCs.
The MIT Moral Machine experiment
The MIT Moral Machine experiment addresses the trolley problem transposed into our modern world of self-driving autopilot algorithms. The Moral Machine experiment gathered millions of answers online across many cultures. It presents situations that you must judge. You can test it yourself on the Moral Machine experiment site at http://moralmachine.mit.edu/.
The Moral Machine experiment confronts us with machine-made moral decisions. An autopilot running calculations does not think like a human in the trolley problem. An ML algorithm thinks in terms of the rules we set. But how can we set them if we do not know the answers ourselves? We must answer this question before putting our autopilot algorithms on the market. Hopefully, by the end of this chapter, you will have some ideas on how to limit such situations.
The Moral Machine experiment extends the trolley problem further. Should we pull the lever and stop implementing AI autopilot algorithms in cars until they are ready in a few decades?
If so, many people will die whom a well-designed autopilot could have saved. An autopilot is never tired; it is always alert and respects traffic regulations. However, autopilots will still face the trolley problem and calculation inaccuracies.
If we do not pull the lever and let autopilots run artificial intelligence programs, autopilots will start making life and death decisions and will kill people along the way.
The goal of this section and chapter is not to explore every situation and suggest rules. Our primary goal is limited to explaining the issues and possibilities so that you can judge what is best for your algorithms. We will, therefore, be prepared to build a decision tree for the autopilot in the next section.
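Before we do, here is a minimal sketch of the kind of model we have in mind: a small scikit-learn decision tree trained on a tiny, hypothetical dataset. The feature names, the sample values, and the labels are invented for illustration only; they are not the data or the rules used later in this chapter.

```python
# A minimal sketch of a decision tree for an autopilot decision.
# The features, sample data, and labels below are hypothetical,
# invented only to illustrate the kind of model discussed here.
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical features:
# [pedestrians_ahead, pedestrians_other_lane, light_is_green_for_car]
X = [
    [3, 1, 1],   # three pedestrians ahead, one on the other lane, green light
    [0, 2, 1],   # lane ahead is clear, two pedestrians on the other lane
    [1, 0, 0],   # one pedestrian ahead, other lane clear, red light for the car
    [4, 0, 1],
]
# Hypothetical decisions: 0 = stay in lane, 1 = change lanes
y = [1, 0, 0, 1]

autopilot_tree = DecisionTreeClassifier(max_depth=2, random_state=0)
autopilot_tree.fit(X, y)

# The tree's rules can be printed, which is what makes it explainable
print(export_text(
    autopilot_tree,
    feature_names=["pedestrians_ahead", "pedestrians_other_lane", "light_is_green"]))
```

What matters here is not the prediction itself but the fact that the tree's decision path can be printed, read, and questioned, which is what makes this kind of model a good candidate for explainable autopilot decisions.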
Let's now explore a life and death situation to prepare ourselves for the task of building a decision tree.
Real life and death situations
The situation described in this section will prepare us to design tradeoffs in moral decision making in autopilots. I used the MIT Moral Machine to create the two options an AI autopilot can take when faced with the situation shown in the following diagram:
Figure 2.2: A situation created using the MIT Moral Machine
For any one of several possible reasons, the car shown in the preceding diagram cannot stop in time; it can only either go straight ahead or change lanes:
- Brake failure.
- The SDC's autopilot did not identify the pedestrians well or fast enough.
- The AI autopilot is confused.
- The pedestrians on the left side of the diagram suddenly crossed the road when the traffic light was red for pedestrians.
- It suddenly began to rain, and the autopilot failed to give the human driver enough time to react. Rain obstructs an SDC's sensors. The cameras and radars can malfunction in this case.
- Another reason.
Many factors can lead to this situation. We will, therefore, summarize it by stating that, whatever the reason, the car does not have enough time to stop before reaching the traffic light.
We will try to provide the best answer to the situation described in the preceding diagram. We will now approach this from an ethical standpoint.
Explaining the moral limits of ethical AI
We now know that any decision made in a life and death situation, in which someone will die no matter what is chosen, will be subjective. It will depend more on cultural and human values than on pure ML calculations.
Explaining AI ethically requires honesty and transparency. We must explain why we, as humans, struggle with this type of situation and why ML autopilots cannot do better.
To illustrate this, let's analyze a potential real-life situation.
On the left side of Figure 2.3, we see a car on autopilot, using AI, that is too close to the pedestrian crossing to stop. We notice that the traffic light is green for the car and red for pedestrians. A death symbol is shown on the man, the woman, and the child because they will be seriously injured if the car hits them:
Figure 2.3: A life and death situation
A human driver could try to:
- Immediately turn the steering wheel to the left, with the brakes on, to hit the wall and stop the car. This manual maneuver automatically turns the autopilot off. Of course, the autopilot could be dysfunctional and not disengage, or the car could continue anyway and hit the pedestrians.
- Immediately perform the same maneuver but cross over to the other lane instead. Of course, a fast car could be coming from the opposite direction. The car could also slide, miss the wall, and hit the child on the other lane anyway.
A human driver could try to change lanes to avoid the three pedestrians and only risk injuring or killing one person on the other lane:
Figure 2.4: Swerving to another lane
The driver might be able to avoid the pedestrian on the other lane. In that case, the driver would be risking one life instead of three.
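To see what a purely utilitarian calculation would look like at this moment, consider the following minimal sketch. The probabilities are hypothetical placeholders, not measured values, and the function is ours, not part of any autopilot library.

```python
# A minimal sketch of a purely utilitarian comparison between the two options.
# The probabilities below are hypothetical placeholders, not measured values.

def expected_harm(people_at_risk, probability_of_impact):
    """Expected number of people harmed if this option is chosen."""
    return people_at_risk * probability_of_impact

# Option 1: stay in the lane and risk hitting the three pedestrians ahead
stay = expected_harm(people_at_risk=3, probability_of_impact=0.9)

# Option 2: change lanes and risk hitting the single pedestrian on the other lane
swerve = expected_harm(people_at_risk=1, probability_of_impact=0.5)

# A purely utilitarian autopilot would simply pick the smaller number
decision = "change lanes" if swerve < stay else "stay in lane"
print(f"stay: {stay:.2f} expected victims, swerve: {swerve:.2f} -> {decision}")
```

Reducing the dilemma to two numbers is exactly what makes the calculation easy to explain and, at the same time, morally unsatisfying.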
What would you do?
I do not think we can ask an AI program to make that choice. We need to explain why and find a way to offer a solution to this problem before letting driverless SDCs roam freely around a city!
Let's view this modern-day trolley problem dilemma from an ML perspective.