LIPEx-Locally Interpretable Probabilistic Explanations-To Look Beyond The True Class

Research output: Working paper › Preprint


Abstract

In this work, we instantiate a novel perturbation-based multi-class explanation framework, LIPEx (Locally Interpretable Probabilistic Explanation). We demonstrate that LIPEx not only locally replicates the probability distributions output by widely used complex classification models but also provides insight into how every feature deemed to be important affects the prediction probability for each of the possible classes. We achieve this by defining the explanation as a matrix obtained via regression with respect to the Hellinger distance in the space of probability distributions. Ablation tests on text and image data show that LIPEx-guided removal of important features from the data causes a larger change in the underlying model's predictions than similar tests based on other saliency-based or feature-importance-based Explainable AI (XAI) methods. We also show that, compared to LIME, LIPEx is more data-efficient, requiring fewer perturbations of the data to obtain a reliable explanation. This data efficiency manifests as LIPEx computing its explanation matrix around 53% faster than all-class LIME in classification experiments with text data.
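The core idea of the explanation matrix can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the toy black box, the binary perturbation scheme, and the gradient-descent optimizer are all assumptions made for the sketch. A matrix W (features x classes) is fitted so that softmax over the perturbation's weighted features locally matches the black box's output distribution, with the fit measured by the squared Hellinger distance.

```python
import numpy as np

rng = np.random.default_rng(0)
n_features, n_classes = 6, 3

# Hypothetical black-box classifier (an assumption for this sketch):
# maps a binary feature mask to a probability distribution over classes.
W_true = rng.normal(size=(n_features, n_classes))

def softmax(logits):
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def black_box(masks):
    return softmax(masks @ W_true)

def hellinger_sq(p, q):
    # Squared Hellinger distance between corresponding rows of p and q.
    return 0.5 * ((np.sqrt(p) - np.sqrt(q)) ** 2).sum(axis=-1)

# Local binary perturbations (features switched on/off) and the
# black box's distributions on them.
n_pert = 400
Z = rng.integers(0, 2, size=(n_pert, n_features)).astype(float)
P = black_box(Z)

# Fit the explanation matrix W by gradient descent on the mean
# squared Hellinger distance between softmax(Z @ W) and P.
# Using 1 - sum_c sqrt(p_c q_c) as the per-sample loss (equal to the
# squared Hellinger distance when both rows sum to 1), the gradient
# w.r.t. the logits is -0.5*sqrt(P*Q) + 0.5*Q*sum_c sqrt(P_c*Q_c).
W = np.zeros((n_features, n_classes))
lr = 0.3
for _ in range(3000):
    Q = softmax(Z @ W)
    S = np.sqrt(P * Q)
    G = -0.5 * S + 0.5 * Q * S.sum(axis=-1, keepdims=True)
    W -= lr * (Z.T @ G) / n_pert

# Each column of W now scores every feature's influence on one class,
# which is what allows looking beyond the true class.
print("mean squared Hellinger fit:", hellinger_sq(P, softmax(Z @ W)).mean())
```

Reading the fitted matrix column by column gives, for every class, a ranked list of influential features; removing the top-ranked ones and re-querying the black box is the kind of ablation test the abstract describes.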
Original language: Undefined
Publication status: E-pub ahead of print - 7 Oct 2023

Keywords

  • XAI
  • probabilistic models

Research Beacons, Institutes and Platforms

  • Institute for Data Science and AI
