One of the great challenges of Machine Learning is to produce systems capable of explaining their decisions and actions to human users.
Thus, the explainability of Artificial Intelligence (XAI) has become one of the challenges of Machine Learning and is now an integral part of the Data Scientist's job: they are called upon to convince users that their models' reasoning is acceptable.
What is the explainability of a model? (XAI)
XAI, or “eXplainable Artificial Intelligence”, refers to a set of processes and methods used to explain, in a simple and understandable way, every result produced by an Artificial Intelligence. It is a field of Machine Learning that aims to justify as precisely as possible a result given by a model.
In effect, XAI helps anyone who is not a technical specialist or Data Scientist to understand why an algorithm has produced a given result.
For example, suppose you have trained a Machine Learning (ML) model on financial data to guide an investor in choosing an investment sector. With explainable AI, you are then able to explain why one option was selected over another, and why the recommended investment options are best suited to the investor's situation.
XAI: why explainability?
Explainable AI is essential both for developers and Data Scientists and for end users, for several reasons:
- Enabling developers to update and improve models, as well as measure their effectiveness.
- Since the implementation of the GDPR, it has been mandatory to be able to explain any result (scores, segmentation, etc.) that relates to individuals. XAI makes this possible.
- Delivering more concrete results by providing valuable information on a company's key indicators.
- Understanding the “why” will enable you to make better use of the results and to adapt how you present them.
- For example, in the case of a telephone operator, an advisor will need to take a different approach with a customer who is likely to cancel their contract in search of a cheaper offer than with a customer who is about to move house.
- Understanding the reasoning behind algorithms will enable end-users to realize that they are based on logical principles.
Methods for implementing XAI
There are a number of ways to bring transparency and understanding to Artificial Intelligence. The main approaches are as follows.
Layer-wise Relevance Propagation (LRP):
This is a technique that identifies which features of an input vector contribute most to a model's output, by propagating the prediction backwards through the network, layer by layer.
It is applicable to neural network models whose inputs can be images, videos or text.
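As an illustration, here is a minimal NumPy sketch of the LRP epsilon-rule on a toy two-layer network. The weights are random placeholders and the stabiliser value is an arbitrary choice, so this is a sketch of the idea rather than a reference implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer network with placeholder weights: 4 inputs -> 3 hidden -> 2 outputs
W1, b1 = rng.normal(size=(4, 3)), np.zeros(3)
W2, b2 = rng.normal(size=(3, 2)), np.zeros(2)

def relu(x):
    return np.maximum(0, x)

def lrp_dense(a, w, b, relevance, eps=1e-6):
    """Redistribute relevance from a dense layer's output to its input (epsilon rule)."""
    z = a @ w + b                          # pre-activations
    z = z + eps * np.where(z >= 0, 1, -1)  # stabiliser avoids division by zero
    s = relevance / z                      # relevance per unit of activation
    return a * (w @ s)                     # share attributed to each input neuron

# Forward pass, keeping every layer's activations
x = rng.normal(size=4)
h = relu(x @ W1 + b1)
out = h @ W2 + b2

# The relevance to explain is the score of the predicted class
relevance = np.zeros_like(out)
relevance[out.argmax()] = out[out.argmax()]

# Propagate relevance backwards, layer by layer, down to the input features
r_hidden = lrp_dense(h, W2, b2, relevance)
r_input = lrp_dense(x, W1, b1, r_hidden)
print("Relevance of each input feature:", r_input)
```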
Counterfactual Method:
A counterfactual explanation describes a causal situation in the form: “If X had not happened, Y would not have happened”.
In practice, once a result has been obtained, the inputs are deliberately modified and the effect on the result is observed: the smallest change that alters the outcome constitutes the counterfactual explanation.
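As a rough illustration, the sketch below trains a scikit-learn classifier and searches for the smallest single-feature change that flips its prediction; the dataset and the perturbation grid are arbitrary choices made for the example:

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression

# Train a simple classifier whose decision we want to explain
X, y = load_breast_cancer(return_X_y=True)
model = LogisticRegression(max_iter=5000).fit(X, y)

def simple_counterfactual(x, model, max_scale=2.0, steps=200):
    """Return the smallest single-feature change that flips the prediction."""
    original = model.predict([x])[0]
    for scale in np.linspace(0.01, max_scale, steps):  # smallest changes first
        for i in range(len(x)):
            for sign in (1, -1):
                candidate = x.copy()
                candidate[i] *= 1 + sign * scale
                if model.predict([candidate])[0] != original:
                    return i, x[i], candidate[i]
    return None

result = simple_counterfactual(X[0].copy(), model)
if result is not None:
    feature, old_value, new_value = result
    print(f"Changing feature {feature} from {old_value:.2f} to {new_value:.2f} "
          f"flips the prediction")
```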
Local Interpretable Model-Agnostic Explanations (LIME):
LIME is a model-agnostic method that aims to explain any machine learning classifier and the predictions it makes. It is one of the so-called local methods, in the sense that it explains the model's choices on an instance-by-instance basis rather than globally over an entire dataset. With such a method, even non-specialists can make sense of the model's decisions.
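As a short usage sketch, assuming the open-source `lime` package is installed alongside scikit-learn, one prediction of a random forest can be explained like this:

```python
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

# Train a model to explain
data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)

# LIME perturbs this one sample, queries the model on the perturbations,
# and fits a small linear model locally to weigh each feature's contribution
explanation = explainer.explain_instance(
    data.data[0], model.predict_proba, num_features=4
)
print(explanation.as_list())
```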
Rationalisation:
This is a method used especially for conversational AI systems such as ChatGPT. Here, the machine is given the autonomy to explain its own actions in natural language.
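As a purely illustrative sketch, assuming the `openai` Python client and a placeholder model name, a conversational model can be asked to rationalise its own answer with a follow-up prompt:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
question = "Should I refinance my mortgage at 5%?"

# First call: get the model's answer
answer = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": question}],
).choices[0].message.content

# Second call: ask the model to explain, in its own words, how it got there
rationale = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "user", "content": question},
        {"role": "assistant", "content": answer},
        {"role": "user", "content": "Explain step by step how you reached that answer."},
    ],
).choices[0].message.content
print(rationale)
```

Note that such a rationale is generated after the fact: it is a plausible account in natural language, not a guaranteed trace of the model's actual computation.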
Conclusion about XAI
The transparent nature of explainable AI offers numerous advantages for decision-making. XAI helps interpret the complex results produced by Machine Learning models, enables developers to update and improve those models, and offers the added advantage of good data traceability.