Explainable Artificial Intelligence (XAI) and its real-world applications


Dr. Kumar Dheenadayalan, Dr. Sumant Kulkarni

Read time: 5 Mins

With the commercialization of AI solutions, the biases of the researchers who build them have started to surface. From the choice of a particular model through every step of the modeling process, the biases of the person developing the model become deeply embedded in the final solution delivered.

Detecting research bias is a minor concern that can be addressed through a sensible research methodology. The inherent black-box structure that delivers better predictive performance, however, can become an issue in high-stakes decision making if suitable explanations for the decisions are not available. It is important to know when a model will succeed and when it will fail, why the model makes any specific prediction, and to what extent the model is reliable. Answering these questions will lead to increased trust in AI solutions for critical decision-making applications.

Explainability shouldn’t be used interchangeably with interpretability, as there is a subtle but important difference between the two that every data scientist needs to understand. Interpretable models are understandable by humans a priori: a person can follow the model’s logic and reproduce its results. An XAI approach, by contrast, first builds a black-box model and then, in parallel, builds a post hoc explanation of why the model arrived at the state it is in. Interpretability helps humans understand the cause and effect behind a model’s prediction, whereas XAI helps dissect the internal mechanics of a black box to understand the importance of various features and the decisions they can lead to.

To unravel the subtle difference, let’s consider a highly interpretable model like a decision tree. The rules generated by following the path from the root node to a leaf give us the cause of a particular prediction, and the same rules can be used by humans to make predictions for all subsequent inputs. In the case of a black-box model, like a Convolutional Neural Network (CNN) classifying animals, explainability plays a key role in understanding how the CNN differentiates between animals. For example, the shape of the nose can have very high activation when differentiating between a cat and a dog. XAI is a post hoc analysis that helps verify whether the basis of decision making (the shape of the nose, in this case) is in line with the way humans explain the difference.
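As a minimal sketch of the interpretable case, a decision tree’s learned rules can be printed directly and then re-applied by a human without consulting the model again (illustrated here with scikit-learn’s `DecisionTreeClassifier` on the Iris dataset, chosen only for convenience):

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

iris = load_iris()
X, y = iris.data, iris.target

# A shallow tree keeps the rule set small enough to read at a glance.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# The root-to-leaf rules ARE the explanation: a human can follow the
# same thresholds to predict the class of any new flower.
rules = export_text(tree, feature_names=iris.feature_names)
print(rules)
```

For a black-box CNN, no such readable rule listing exists, which is exactly the gap post hoc XAI methods try to fill.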

DARPA initiated a project back in 2016 to go beyond interpretability and explore the world of XAI, specifically for deep learning models.

The emerging interest in explainable AI (XAI) has been driven by two factors:

  • The criticism that AI models lacked robust post hoc analysis, which led to a lack of trust in AI.
  • Wider implementation of regulations like GDPR.

In the paper titled “Why Should I Trust You? Explaining the Predictions of Any Classifier”, Ribeiro et al. proposed the first and most widely used XAI method, Local Interpretable Model-agnostic Explanations (LIME). The application of LIME to images that classify a husky as a wolf offers great insight into why explainability is key to preventing sub-standard learning. The paper demonstrates how a model learned the existence of snow in the image as a major factor for classifying the animal as a wolf. Such explanations can easily help identify flawed learning and, in turn, prevent bias in both the data and the modeling.
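LIME’s core recipe can be sketched in a few lines: perturb the input around the instance being explained, weight each perturbation by its proximity, query the black box, and fit a simple weighted linear surrogate whose coefficients serve as the local explanation. The sketch below is a toy illustration of that idea with a hypothetical `black_box` function, not the actual API of the `lime` library:

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

def black_box(X):
    # Stand-in for an opaque model: only feature 0 actually matters.
    return (X[:, 0] > 0.5).astype(float)

x0 = np.array([0.7, 0.3])               # instance to explain
Z = x0 + rng.normal(0, 0.3, (500, 2))   # local perturbations around x0
# Proximity kernel: nearby samples get more weight in the surrogate fit.
weights = np.exp(-np.sum((Z - x0) ** 2, axis=1) / 0.25)
surrogate = Ridge(alpha=1.0).fit(Z, black_box(Z), sample_weight=weights)

# The surrogate's coefficients are the local explanation: feature 0
# should dominate, matching what the black box really uses.
print(surrogate.coef_)
```

In the husky-versus-wolf case, the same procedure over image superpixels is what revealed that “snow” regions, not the animal, carried the weight.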

Another very interesting read, and a motivating example that highlights bugs in a learnt AI model, is the paper by Lapuschkin et al. titled “Unmasking Clever Hans predictors and assessing what machines really learn”, published in Nature Communications. It showed that a classifier learnt the existence of a copyright tag in a corner of the image as the main feature for classifying an image as a horse. Even though this is a flaw in the data collection and pre-processing steps, such bugs usually go undetected. In high-stakes scenarios, like the classification of cancer or tumors using medical imaging, such bugs are highly undesirable.

So, is XAI always necessary? The answer is no. But it is definitely necessary for applications where the potential consequences are critical.

Moving beyond the motivating image classification example, let’s now look at some high stakes industries where XAI can have a positive impact.

Healthcare – The potential benefits of AI in healthcare are high, and the risk associated with an untrustworthy AI system is even higher. It goes without saying that the decisions made by AI models to assist doctors in classifying critical diseases, using structured parameters or unstructured data like medical imaging, have far-reaching consequences. An AI system that predicts and, at the same time, explains the reasons for its conclusion is far more valuable than one that predicts and then leaves doctors to spend an equal amount of time (with or without the AI’s decision) figuring out whether that decision is accurate and trustworthy. In healthcare, lives are at stake, and hence XAI has the utmost relevance.

BFSI – BFSI has a lot to gain, and XAI can revolutionize this industry. Credit risk assessment using AI has been widely adopted in banking and insurance. Premium estimation based on multimodal information is picking up pace in developed countries with pay-as-you-drive and pay-how-you-drive models that use machine learning for decision making. However, widespread awareness in developed countries about preventing the misuse of data has brought in concrete regulations like the EU’s General Data Protection Regulation (GDPR), in which Article 22 places restrictions on fully automated decision making and Articles 13-15 establish a right to seek explanations (though not explicitly worded as such) for decisions made. This has a significant impact, as the AI systems usually employed in risk assessment, premium estimation and other decisions are black-box models. XAI systems capable of providing superlative results along with comprehensible explanations would build enough trust, and satisfy enough of the regulatory requirements, to drive better adoption of AI solutions in the industry. Denying a loan, inflating or deflating premium prices for health or motor insurance, or making wrong suggestions in stock trading carry high financial stakes, and hence XAI can be a valuable addition in explaining such decisions to the affected party.

Automobiles – Autonomous driving has been an evolving theme and is the future of the automobile industry. Driverless or self-driving cars are fascinating, as long as no wrong move is made; one wrong move can cost one or more lives in this high-stakes application of AI. Explainability is the key to understanding the capabilities and limitations of the system before deployment. Understanding the shortcomings of driving assistance (or auto-pilot) in the field, when used by customers, is important so that their causes can be assessed, explained and fixed on priority. To a large extent, assistive parking and voice assistants are attractive features that involve relatively low-risk decisions by the model. But in other cases, like braking assistance or self-driving itself, XAI becomes key to identifying bias and fixing it.

Judicial System – There is increasing adoption of AI systems in decision making in the judicial process in western countries. The inherent bias such systems can carry against a specific ethnic group has been extensively documented in the past by ProPublica. Bias in AI applications, like granting parole based on the probability of repeat offense, has far-reaching consequences, and fairness in them is a must because they deal with the rights and liberties of individuals.

To conclude, XAI is definitely an area of research and development that is rapidly gaining pace. It is also extensively researched at ZenLabs, as it helps data scientists better understand their models and eliminate the bias they might unconsciously embed in them. We are glad to note that many of our AI solutions from ZenLabs aim not only for better predictive performance but also come coupled with reasonable explainability. We use them to offer a position of advantage to our strong customer base in BFSI, Manufacturing and Retail.
