Hands-On Explainable AI (XAI) with Python

By Denis Rothman

About this book

Effectively translating AI insights to business stakeholders requires careful planning, design, and visualization choices. The problem, the model, the relationships among variables, and the resulting findings are often subtle, surprising, and technically complex.

Hands-On Explainable AI (XAI) with Python will see you work through hands-on machine learning Python projects that are strategically arranged to strengthen your grasp of AI results analysis. You will build models, interpret results with visualizations, and integrate XAI reporting tools into different applications.

You will build XAI solutions in Python, TensorFlow 2, Google Cloud’s XAI platform, Google Colaboratory, and other frameworks to open up the black box of machine learning models. The book will introduce you to several open-source XAI tools for Python that can be used throughout the machine learning project life cycle.

You will learn how to explore machine learning model results, review key influencing variables and variable relationships, detect and handle bias and ethics issues, and integrate predictions into user-explainable interfaces, using Python to support the visualization of machine learning models.

By the end of this AI book, you will possess an in-depth understanding of the core concepts of XAI.

Publication date: July 2020
Publisher: Packt
Pages: 454
ISBN: 9781800208131

 

White Box XAI for AI Bias and Ethics

AI provides complex algorithms that can replace or emulate human intelligence. We tend to think that AI will spread unchecked by regulations. Without AI, corporate giants cannot process the huge amounts of data they face. In turn, ML algorithms require massive amounts of public and private data for training purposes to guarantee reliable results.

However, from a legal standpoint, AI remains a form of automatic processing of data. As such, just like any other method that processes data automatically, AI must follow the rules established by the international community, which compel AI designers to explain how decisions are reached. Explainable AI (XAI) has become a legal obligation.

The legal problem of AI worsens once we realize that for an algorithm to work, it requires data, that is, huge volumes of data. Collecting data requires access to networks, emails, text messages, social networks, hard disks, and more. By its very nature...

 

Moral AI bias in self-driving cars

In this section, we will explain AI bias, morals, and ethics. Explaining AI goes well beyond understanding how an AI algorithm works from a mathematical point of view to reach a given decision. Explaining AI also includes defining the limits of AI algorithms in terms of bias and moral and ethical parameters. We will use AI in self-driving cars (SDCs) to illustrate these terms and the concepts they convey.

The goal of this section is to explain AI, not to advocate the use of SDCs, which remains a personal choice, or to judge a human driver's decisions made in life and death situations.

Explaining does not mean judging. XAI provides us with the information we need to make our decisions and form our own opinions.

This section will not provide moral guidelines. Moral guidelines depend on cultures and individuals. However, we will explore situations that require moral judgments and decisions, which will take us to the very limits of AI and XAI.

We will provide...

 

Standard explanation of autopilot decision trees

An SDC contains an autopilot that was designed with several artificial intelligence algorithms. Almost any AI algorithm can be applied to an autopilot's needs, such as clustering algorithms, regression, and classification. Reinforcement learning and deep learning also provide many powerful calculations.

We will first build an autopilot decision tree for our SDC. The decision tree will be applied to a life and death decision-making process.
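To make the setup concrete, here is a minimal sketch of such a two-class decision tree in scikit-learn. The feature names (right_lane_safety, left_lane_safety) and the tiny training set are hypothetical illustrations, not the chapter's actual dataset:

```python
# A minimal sketch: a two-class autopilot decision tree.
# The features and data below are hypothetical illustrations.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Each row: [right_lane_safety, left_lane_safety], scores in [0, 1]
X = np.array([
    [0.9, 0.1],  # right lane much safer -> stay
    [0.8, 0.4],
    [0.2, 0.9],  # left lane much safer -> swerve
    [0.3, 0.7],
])
y = np.array([0, 0, 1, 1])  # 0 = stay in the right lane, 1 = swerve to the left

autopilot = DecisionTreeClassifier(random_state=0)
autopilot.fit(X, y)

print(autopilot.predict([[0.4, 0.6]]))  # [1] -> swerve
```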

Let's start by first describing the dilemma from a machine learning algorithm's perspective.

The SDC autopilot dilemma

The decision tree we are going to create will be able to reproduce an SDC's autopilot trolley problem dilemma. We will adapt it to the life and death dilemma described in the Moral AI bias in self-driving cars section of this chapter.

The decision tree will have to decide if it stays in the right lane or swerves over to the left lane. We will restrict our experiment...

 

XAI applied to an autopilot decision tree

In this section, we will explain decision trees through scikit-learn's tree module, the decision tree classifier's parameters, and decision tree graphs. The goal is to provide the user with a step-by-step method to explain decision trees.

We will begin by parsing the structure of a decision tree.
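As a reference point for what follows, scikit-learn's tree module can render a fitted tree as text or as a graph. A minimal sketch, continuing the hypothetical autopilot estimator from the earlier sketch:

```python
# A minimal sketch of scikit-learn's tree-rendering utilities,
# continuing the hypothetical `autopilot` estimator from above.
import matplotlib.pyplot as plt
from sklearn.tree import export_text, plot_tree

features = ["right_lane_safety", "left_lane_safety"]

# Text view of the decision rules
print(export_text(autopilot, feature_names=features))

# Graph view of the same tree
plot_tree(autopilot, feature_names=features,
          class_names=["stay", "swerve"], filled=True)
plt.show()
```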

Structure of a decision tree

The structure of a decision tree provides precious information for XAI. However, the default values of the decision tree classifier produce confusing outputs. We will first generate a decision tree structure with the default values. Then, we will use a what-if approach that will prepare us for the XAI tools in Chapter 5, Building an Explainable AI Solution from Scratch.

Let's start by implementing the default decision tree structure's output.

The default output of the default structure of a decision tree

The decision tree estimator contains a tree_ object that stores the...
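As a preview of what that inspection looks like, here is a minimal sketch, again continuing the hypothetical autopilot estimator; the attribute names are scikit-learn's, while the interpretations in the comments are ours:

```python
# A minimal sketch of the arrays stored in the fitted tree_ object,
# continuing the hypothetical `autopilot` estimator from above.
tree_ = autopilot.tree_

print("node_count    :", tree_.node_count)
print("children_left :", tree_.children_left)   # -1 marks a leaf
print("children_right:", tree_.children_right)  # -1 marks a leaf
print("feature       :", tree_.feature)    # index of the feature tested at each node
print("threshold     :", tree_.threshold)  # split threshold at each node
```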

 

Using XAI and ethics to control a decision tree

We know that the autopilot will have to decide whether to stay in a lane or swerve over to another lane to minimize the risk of killing pedestrians. The decision model has been trained and tested, and its structure has been analyzed. Now it's time to put the decision tree on the road with the autopilot. Whatever algorithm you try to use, you will face the moral limits of a life and death situation. If an SDC faces such a vital decision, it might kill somebody no matter what algorithm or ensemble of algorithms the autopilot runs.

Should we let an autopilot drive a car? Should we forbid the use of autopilots? Should we find ways to alert the driver that the autopilot will be shut down in such a situation? If the autopilot is shut down, will the human driver have enough time to take over before hitting a pedestrian?
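One way to approach the third question in software is to place a guard around the model. The sketch below is a hypothetical rule, not the chapter's implementation: if neither lane's safety score reaches a (hypothetical) SAFETY_THRESHOLD, the autopilot alerts the driver instead of deciding. It reuses the autopilot estimator from the earlier sketch:

```python
# A hypothetical ethical guard around the decision tree, reusing the
# `autopilot` estimator from above. The threshold and rule are illustrative.
SAFETY_THRESHOLD = 0.5  # hypothetical minimum acceptable lane-safety score

def guarded_decision(model, right_lane_safety, left_lane_safety):
    # If neither lane is judged safe enough, refuse to decide and alert.
    if max(right_lane_safety, left_lane_safety) < SAFETY_THRESHOLD:
        return "ALERT: autopilot disengaging, driver must take over"
    choice = model.predict([[right_lane_safety, left_lane_safety]])[0]
    return "stay in the right lane" if choice == 0 else "swerve to the left lane"

print(guarded_decision(autopilot, 0.3, 0.2))  # neither lane safe -> alert
print(guarded_decision(autopilot, 0.2, 0.8))  # left lane safer -> swerve
```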

In this section, we will introduce real-life bias, moral, and ethical issues in the decision tree to measure...

 

Summary

This chapter approached XAI using moral, technical, ethical, and bias perspectives.

The trolley problem, transposed to SDC autopilot ML algorithms, challenges automatic decision-making processes. In life and death situations, a human driver faces near-impossible decisions. The humans who design AI algorithms must find ways to make autopilots as reliable as possible.

Decision trees provide efficient solutions for SDC autopilots. We saw that a standard approach to designing and explaining decision trees provides useful information. However, that alone isn't enough to understand decision trees in depth.

XAI encourages us to go further and analyze the structure of decision trees. We explored the many options to explain how decision trees work. We were able to analyze the decision-making process of a decision tree level by level. We then displayed the graph of the decision tree step by step.

Still, that was insufficient to find a way to minimize...

 

Questions

  1. The autopilot of an SDC can override traffic regulations. (True|False)
  2. The autopilot of an SDC should always be activated. (True|False)
  3. The structure of a decision tree can be controlled for XAI. (True|False)
  4. A well-trained decision tree will always produce a good result with live data. (True|False)
  5. A decision tree uses a set of hardcoded rules to classify data. (True|False)
  6. A binary decision tree can classify more than two classes. (True|False)
  7. The graph of a decision tree can be controlled to help explain the algorithm. (True|False)
  8. The trolley problem is an optimizing algorithm for trollies. (True|False)
  9. A machine should not be allowed to decide whether to kill somebody or not. (True|False)
  10. An autopilot should not be activated in heavy traffic until it's totally reliable. (True|False)
   


About the Author

  • Denis Rothman

Denis Rothman graduated from Sorbonne University and Paris-Diderot University, patenting one of the very first word2matrix embedding solutions. Denis Rothman is the author of three cutting-edge AI solutions: one of the first AI cognitive chatbots, more than 30 years ago; a profit-oriented AI resource-optimizing system; and an AI APS (Advanced Planning and Scheduling) solution based on cognitive patterns used worldwide in aerospace, rail, energy, apparel, and many other fields. Designed initially as a cognitive AI bot for IBM, it then went on to become a robust APS solution used to this day.
