
Designing and extracting

Extracting data from the outputs of an AI model is one way of providing XAI data. Another approach consists of designing outputs at each phase of an AI solution from the start. Explainable components can be designed for the inputs, the model, the outputs, the events occurring when the AI model is in production, and the accountability requirements.

We need an XAI executive function to visualize how explainable models fit into each phase of an AI process.

The XAI executive function

In everyday life, our executive function governs how we think and manage our activities. It is what allows us, for example, to follow directions and focus on specific tasks.

Our brain uses an executive function to control our cognitive processes. AI project managers use executive functions to monitor all of the phases of an AI system.

Representing XAI through an executive function will help you navigate the many ways to implement XAI.

One question that must guide you at all moments when implementing XAI is:

Can your AI program be trusted?

A basic rule to remember is that when a problem comes up in an AI program that you are involved with in one way or another, you are on your own. Partial or total responsibility for explaining it will fall on you.

You cannot afford to miss any aspect of XAI. One omission, and critical errors will go unexplained. You can lose the trust of users in a few hours after having worked for months on an AI project.

The first step is to represent the different areas you will have to apply XAI to in a chart that goes from development to production and accountability, as shown in the following example:

Figure 1.4: Executive function chart

You can implement XAI at every stage of an AI project, as shown in the chart (see the code sketch after this list):

  • Development, input: By making key aspects of the data available to analyze the AI process
  • Development, model: By making the logic of an AI model explainable and understandable
  • Development, output: By displaying the output in various ways and from different angles
  • Production: By explaining how the AI model reached a result with all of the development XAI tools
  • Accountability: By explaining exactly how a result was reached, starting from the first step of the process to the user interface
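
To make the list above concrete, here is a minimal Python sketch of what XAI hooks at each phase could look like. The XAIPipeline class, its method names, and the stand-in MeanModel are hypothetical illustrations, assuming only a model object that exposes a predict method; they are not part of a specific library:

    from statistics import mean

    class XAIPipeline:
        # Hypothetical sketch: each phase produces an explanation and
        # appends it to a single trace for accountability.
        def __init__(self, model):
            self.model = model  # any object with a predict(x) method
            self.trace = []     # accumulated explanations

        def explain_input(self, x):
            # Development, input: expose key aspects of the data
            note = f"input: {len(x)} values, mean={mean(x):.2f}"
            self.trace.append(note)
            return note

        def explain_output(self, y):
            # Development, output: display the output from another angle
            note = f"output: {y!r}"
            self.trace.append(note)
            return note

        def run(self, x):
            # Production: every prediction leaves an explainable trail
            self.explain_input(x)
            y = self.model.predict(x)
            self.explain_output(y)
            return y

        def accountability_report(self):
            # Accountability: replay each step from input to output
            return "\n".join(self.trace)

    class MeanModel:
        # Trivial stand-in model for the example
        def predict(self, x):
            return mean(x)

    pipeline = XAIPipeline(MeanModel())
    pipeline.run([1.0, 2.0, 3.0])
    print(pipeline.accountability_report())

The design choice in this sketch is that every phase appends to one trace, so the accountability report can replay the whole process from input to output.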

Note that the XAI functions built in the development phase of the chart need to be activated by support requests once the AI program is in production, whether for XAI, maintenance, or support purposes.

Also, you can see that a service-level agreement (SLA) for XAI can be required in your AI contract with your customer or end user. If, for example, your SLA requires you to fix an AI program within an hour and no developer is present to explain the code, intuitive XAI interfaces are highly recommended!

The word "intuitive" has opened the door to the many profiles of people that will need to use XAI at different times for different reasons.

Let's list a few examples of XAI approaches:

  • Intuitive: The XAI interface must be understandable at a glance, with no detailed explanations.
  • Expert: Precise information is required, such as the description of a machine learning equation.
  • Implicit: An expert who masters a subject just needs a hint to understand the AI model.
  • Explicit: A user might want a detailed explanation, but not at the expert level, for example.
  • Subjective: A manager might just want a group of users to explain how they view an AI model.
  • Objective: A manager might want the developers to produce XAI to confirm a subjective view.
  • Explaining: An AI model can be explained in plain natural language.
  • AI to explain: Other AI models can be used to analyze AI outputs.

Let's sum this up in an executive function table. In this table, each letter has the following meaning:

  • D stands for a development XAI request.
  • P stands for a production XAI request.
  • A stands for an accountability XAI request.

Each XAI request is followed by an alert level from 1 (low) to 10 (high). For example, D(1) means that only a small XAI module is required during development.

The following table provides a few examples of how to use this notation in a given situation:

The explanations below are simply examples showing the tremendous number of possibilities we can encounter when implementing XAI:

  • D(3)-A(9): A legal team using an AI program requests input dataset XAI from a developer.
  • P(9)-D(1): A user rejects a result and asks a developer to activate the XAI interface.
  • P(7): A group of users in production requests that the XAI interface be activated to explain results.
  • A(10): The legal team is facing an investigation into its privacy policy and requires simple Excel-type queries to provide the required explanations. No AI is required to perform the XAI tasks.
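
As a side note, this request notation is simple enough to parse programmatically, for example to route each alert to the right team. The parse_requests function below is a hypothetical sketch, not code from the book:

    import re

    # D = development, P = production, A = accountability; the number in
    # parentheses is the alert level from 1 (low) to 10 (high).
    PHASES = {"D": "development", "P": "production", "A": "accountability"}

    def parse_requests(code):
        # Return (phase, alert level) pairs from a code such as "D(3)-A(9)"
        pairs = re.findall(r"([DPA])\((\d+)\)", code)
        return [(PHASES[letter], int(level)) for letter, level in pairs]

    print(parse_requests("D(3)-A(9)"))  # [('development', 3), ('accountability', 9)]
    print(parse_requests("P(7)"))       # [('production', 7)]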

These examples show the huge number of combinations that can arise between the eight XAI approaches listed in the table and the five phases of an AI and XAI project. This adds up to more than 10 elements to consider; taking an average of 10 elements, the number of combinations of 5 elements among 10 already represents 252 scenarios to design and implement. One such scenario could be a problem running from input to production (4 phases) and involving an expert (1) XAI interface for each phase. It is impossible to design all the possibilities separately.
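
The count of 252 can be checked with Python's standard library:

    from math import comb

    # Number of ways to choose 5 elements among 10
    print(comb(10, 5))  # 252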

In this chapter, we will dive directly into an XAI project within a medical diagnosis timeline.