White Box XAI for AI Bias and Ethics
AI provides complex algorithms that can emulate, or even replace, human intelligence. We tend to assume that AI will spread unchecked by regulations. Without AI, corporate giants cannot process the huge amounts of data they face. In turn, ML algorithms require massive amounts of public and private data for training to produce reliable results.
However, from a legal standpoint, AI remains a form of automatic processing of data. As such, just like any other method that processes data automatically, AI must follow the rules established by the international community, which compel AI designers to explain how decisions are reached. Explainable AI (XAI) has become a legal obligation.
The legal problem of AI worsens once we realize that for an algorithm to work, it requires huge volumes of data. Collecting that data requires access to networks, emails, text messages, social networks, hard disks, and more. By its very nature...
Moral AI bias in self-driving cars
In this section, we will explain AI bias, morals, and ethics. Explaining AI goes well beyond understanding how an AI algorithm reaches a given decision from a mathematical point of view. Explaining AI also includes defining the limits of AI algorithms in terms of bias, moral, and ethical parameters. We will use AI in self-driving cars (SDCs) to illustrate these terms and the concepts they convey.
The goal of this section is to explain AI, not to advocate the use of SDCs, which remains a personal choice, or to judge a human driver's decisions made in life and death situations.
Explaining does not mean judging. XAI provides us with the information we need to make our decisions and form our own opinions.
This section will not provide moral guidelines. Moral guidelines depend on cultures and individuals. However, we will explore situations that require moral judgments and decisions, which will take us to the very limits of AI and XAI.
We will provide...
Standard explanation of autopilot decision trees
An SDC contains an autopilot that was designed with several artificial intelligence algorithms. Almost all AI algorithms can apply to an autopilot's needs, such as clustering algorithms, regression, and classification. Reinforcement learning and deep learning add further powerful methods.
We will first build an autopilot decision tree for our SDC. The decision tree will be applied to a life and death decision-making process.
Let's start by first describing the dilemma from a machine learning algorithm's perspective.
The SDC autopilot dilemma
The decision tree we are going to create will be able to reproduce an SDC autopilot's trolley problem dilemma. We will adapt it to the life and death dilemma described in the Moral AI bias in self-driving cars section of this chapter.
The decision tree will have to decide if it stays in the right lane or swerves over to the left lane. We will restrict our experiment...
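A minimal sketch of this stay-or-swerve classifier can be built with scikit-learn. The feature encoding here is a hypothetical one chosen for illustration: each sample holds an estimated safety score for the right and left lanes, and the label says whether the autopilot should stay or swerve.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Hypothetical training data: each sample is
# [right_lane_safety, left_lane_safety], with scores in [0, 1].
# Label 0 = stay in the right lane, 1 = swerve to the left lane.
X = np.array([
    [0.9, 0.1], [0.8, 0.2], [0.7, 0.3],   # right lane safer -> stay
    [0.2, 0.8], [0.1, 0.9], [0.3, 0.7],   # left lane safer -> swerve
])
y = np.array([0, 0, 0, 1, 1, 1])

autopilot_tree = DecisionTreeClassifier(random_state=0)
autopilot_tree.fit(X, y)

# The autopilot queries the tree with the current lane-safety estimates.
decision = autopilot_tree.predict([[0.4, 0.6]])[0]
print("swerve" if decision == 1 else "stay")  # prints: swerve
```

The point of the sketch is that the tree reduces a moral dilemma to a threshold on numeric features, which is exactly what the rest of the chapter will have to explain and question.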
XAI applied to an autopilot decision tree
In this section, we will explain decision trees through scikit-learn's tree module, the decision tree classifier's parameters, and decision tree graphs. The goal is to provide the user with a step-by-step method for explaining decision trees.
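As a first illustration of that step-by-step method, scikit-learn's export_text function renders a fitted tree one node per line. The example below uses the Iris dataset purely as a stand-in for the autopilot's training data:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

# Illustrative dataset standing in for the autopilot's training data.
iris = load_iris()
clf = DecisionTreeClassifier(max_depth=2, random_state=0)
clf.fit(iris.data, iris.target)

# One line per node: the split feature, its threshold, and the leaf classes.
report = export_text(clf, feature_names=list(iris.feature_names))
print(report)
```

The same fitted tree can also be drawn with sklearn.tree.plot_tree when a graphical view is preferred to the text report.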
We will begin by parsing the structure of a decision tree.
Structure of a decision tree
The structure of a decision tree provides precious information for XAI. However, the default values of the decision tree classifier produce confusing outputs. We will first generate a decision tree structure with the default values. Then, we will use a what-if approach that will prepare us for the XAI tools in Chapter 5, Building an Explainable AI Solution from Scratch.
Let's start by implementing the default decision tree structure's output.
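The following sketch shows why the default values can be confusing: an unconstrained tree grows until every leaf is pure, producing many nodes, while a what-if constraint such as max_depth yields a structure small enough to explain. The fitted structure itself is exposed through the tree_ attribute as parallel arrays, one entry per node:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# Default parameters: the tree grows until every leaf is pure,
# which can produce a deep, hard-to-read structure.
default_tree = DecisionTreeClassifier(random_state=0).fit(X, y)
print("default depth:", default_tree.get_depth())
print("default nodes:", default_tree.tree_.node_count)

# What-if approach: constraining max_depth yields a smaller,
# explainable tree at the cost of some training accuracy.
shallow_tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print("constrained depth:", shallow_tree.get_depth())
print("constrained nodes:", shallow_tree.tree_.node_count)

# Parsing the structure: children and thresholds, one entry per node
# (a value of -1 in children_left marks a leaf).
t = default_tree.tree_
print("children_left:", t.children_left[:3])
print("thresholds:", t.threshold[:3])
```

Iterating over these arrays is how we will walk the decision-making process level by level later in the section.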
The default output of the default structure of a decision tree
Using XAI and ethics to control a decision tree
We know that the autopilot will have to decide whether to stay in a lane or swerve into another to minimize the risk of killing pedestrians. The decision model has been trained and tested, and its structure has been analyzed. Now it's time to put the decision tree on the road with the autopilot. Whatever algorithm you use, you will face the moral limits of a life and death situation: if an SDC faces such a vital decision, it might kill somebody no matter what algorithm, or ensemble of algorithms, the autopilot runs.
Should we let an autopilot drive a car? Should we forbid the use of autopilots? Should we find ways to alert the driver that the autopilot will be shut down in such a situation? If the autopilot is shut down, will the human driver have enough time to take over before hitting a pedestrian?
In this section, we will introduce real-life bias, moral, and ethical issues in the decision tree to measure...
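One way to act on these questions is to wrap the trained tree in an ethical guard rather than let it decide alone. The sketch below is an assumption for illustration, not the chapter's final design: the safety threshold, the feature encoding, and the autopilot_decision helper are all hypothetical.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Hypothetical features: [right_lane_safety, left_lane_safety] in [0, 1].
X = np.array([[0.9, 0.1], [0.8, 0.3], [0.2, 0.8], [0.1, 0.9]])
y = np.array([0, 0, 1, 1])  # 0 = stay, 1 = swerve
autopilot_tree = DecisionTreeClassifier(random_state=0).fit(X, y)

SAFETY_THRESHOLD = 0.6  # hypothetical ethical limit

def autopilot_decision(right_safety, left_safety):
    """Return 'stay', 'swerve', or hand control back to the driver."""
    # Ethical override: if neither lane is safe enough, the autopilot
    # alerts the driver instead of making a life and death choice itself.
    if max(right_safety, left_safety) < SAFETY_THRESHOLD:
        return "alert driver: manual control required"
    decision = autopilot_tree.predict([[right_safety, left_safety]])[0]
    return "swerve" if decision == 1 else "stay"

print(autopilot_decision(0.9, 0.2))   # clear case: stay in lane
print(autopilot_decision(0.3, 0.4))   # no safe option: escalate to the driver
```

The guard does not resolve the moral dilemma; it makes explicit, and therefore explainable, the point at which the machine refuses to decide.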
Summary
This chapter approached XAI from moral, technical, ethical, and bias perspectives.
The trolley problem, transposed to SDC autopilot ML algorithms, challenges automatic decision-making processes. In life and death situations, a human driver faces near-impossible decisions. The human designers of AI algorithms must find ways to make autopilots as reliable as possible.
Decision trees provide efficient solutions for SDC autopilots. We saw that a standard approach to designing and explaining decision trees provides useful information. However, that alone is not enough to understand decision trees in depth.
XAI encourages us to go further and analyze the structure of decision trees. We explored the many options to explain how decision trees work. We were able to analyze the decision-making process of a decision tree level by level. We then displayed the graph of the decision tree step by step.
Still, that was insufficient to find a way to minimize...
Questions
- The autopilot of an SDC can override traffic regulations. (True|False)
- The autopilot of an SDC should always be activated. (True|False)
- The structure of a decision tree can be controlled for XAI. (True|False)
- A well-trained decision tree will always produce a good result with live data. (True|False)
- A decision tree uses a set of hardcoded rules to classify data. (True|False)
- A binary decision tree can classify more than two classes. (True|False)
- The graph of a decision tree can be controlled to help explain the algorithm. (True|False)
- The trolley problem is an optimizing algorithm for trollies. (True|False)
- A machine should not be allowed to decide whether to kill somebody or not. (True|False)
- An autopilot should not be activated in heavy traffic until it's totally reliable. (True|False)
Further reading
- MIT's Moral Machine: http://moralmachine.mit.edu/
- Scikit-learn's documentation on understanding the decision tree structure: https://scikit-learn.org/stable/auto_examples/tree/plot_unveil_tree_structure.html
- For more on a decision tree structure, you can visit https://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeClassifier.html#sklearn.tree.DecisionTreeClassifier
- For more on plotting decision trees, browse https://scikit-learn.org/stable/modules/generated/sklearn.tree.plot_tree.html
- For more on MIT's Moral Machine, please refer to: Awad, E., Dsouza, S., Kim, R., Schulz, J., Henrich, J., Shariff, A., Bonnefon, J.-F., and Rahwan, I. (2018). The Moral Machine experiment. Nature.