2 White Box XAI for AI Bias and Ethics
AI provides complex algorithms that can emulate or replace human intelligence. We tend to assume that AI will spread unchecked by regulations. Corporate giants cannot process the huge amounts of data they face without AI, and in turn, ML algorithms require massive amounts of public and private training data to produce reliable results.
However, from a legal standpoint, AI remains a form of automatic processing of data. As such, just like any other method that processes data automatically, AI must follow the rules established by the international community, which compel AI designers to explain how decisions are reached. Explainable AI (XAI) has become a legal obligation.
The legal problem of AI worsens once we realize that an algorithm needs data to work, and huge volumes of it. Collecting that data requires access to networks, emails, text messages, social networks, hard disks, and more. By its very nature, processing data automatically requires access to that same data. On top of that, users want quick access to websites that are increasingly driven by AI.
We need to add ethical rules to our AI algorithms to avoid being slowed down by regulations and fines. We must be ready to explain any alleged bias in our ML algorithms. Any company, whether huge or tiny, can be sued and fined.
This puts huge pressure on online platforms to respect ethical regulations and avoid bias. In 2019, Google settled with the U.S. Federal Trade Commission (FTC) for USD 170,000,000 over YouTube's alleged violations of children's privacy laws. In the same year, Facebook was fined USD 5,000,000,000 by the FTC for violating privacy laws. On January 21, 2019, France's data protection authority, the CNIL, applied the European General Data Protection Regulation (GDPR) and fined Google €50,000,000 for a lack of transparency in its advertising process. Google and Facebook stand out because they are well known, but every company faces these issues.
The roadmap of this chapter thus becomes clear. We will determine how to approach AI ethically. We will explain the risk of bias in our algorithms when an issue comes up. We will apply explainable AI as much as possible and as well as we can.
We will start by judging an autopilot in a self-driving car (SDC) in life-and-death situations. We will try to determine how an SDC driven by an autopilot can avoid killing people in critical traffic scenarios.
With these life-and-death situations in mind, we will build a decision tree in an SDC's autopilot. Then, we will apply an explainable AI approach to decision trees.
Finally, we will learn how to control bias and insert ethical rules in real time in the SDC's autopilot.
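Before we get there, the following is a minimal, hypothetical sketch of the kind of decision tree an autopilot might rely on, using scikit-learn's DecisionTreeClassifier. The feature names, training samples, and labels are invented for illustration only and are not the dataset we will build later in this chapter:

```python
# A minimal, hypothetical sketch of a decision tree for an autopilot-style
# decision. The features, samples, and labels below are invented for
# illustration only.
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical features: [obstacle_distance_m, speed_kmh, pedestrian_detected]
X = [
    [50, 30, 0],
    [10, 60, 1],
    [5, 40, 1],
    [80, 90, 0],
    [15, 50, 1],
    [8, 70, 1],
]
# Hypothetical decisions: 0 = keep lane, 1 = brake hard
y = [0, 1, 1, 0, 1, 1]

clf = DecisionTreeClassifier(max_depth=2, random_state=0)
clf.fit(X, y)

# export_text prints the learned rules, which is one way to start
# explaining how the tree reaches a decision
print(export_text(clf, feature_names=[
    "obstacle_distance_m", "speed_kmh", "pedestrian_detected"]))
```

Even this toy tree hints at why decision trees lend themselves to XAI: the learned rules can be printed and inspected directly, which is the property we will exploit when explaining and controlling the autopilot's decisions.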
This chapter covers the following topics:
- The Moral Machine, Massachusetts Institute of Technology (MIT)
- Life and death autopilot decision making
- The ethics of explaining the moral limits of AI
- An explanation of autopilot decision trees
- A theoretical description of decision tree classifiers
- XAI applied to an autopilot decision tree
- The structure of a decision tree
- Using XAI and ethics to control a decision tree
- Real-time autopilot situations
Our first step will be to explore the challenges that an autopilot in an SDC faces in life-and-death situations.