Model-agnostic variable importance for predictive uncertainty: an entropy-based approach

Danny Wood, Theodore Papamarkou, Matt Benatan, Richard Allmendinger

Research output: Contribution to journal › Article › peer-review


Abstract

In order to trust the predictions of a machine learning algorithm, it is necessary to understand the factors that contribute to those predictions. In the case of probabilistic and uncertainty-aware models, it is necessary to understand not only the reasons for the predictions themselves, but also the model's level of confidence in those predictions. In this paper, we show how existing methods in explainability can be extended to uncertainty-aware models and how such extensions can be used to understand the sources of uncertainty in a model's predictive distribution. In particular, by adapting permutation feature importance, partial dependence plots, and individual conditional expectation plots, we demonstrate that novel insights into model behaviour may be obtained and that these methods can be used to measure the impact of features on both the entropy of the predictive distribution and the log-likelihood of the ground truth labels under that distribution. With experiments using both synthetic and real-world data, we demonstrate the utility of these approaches in understanding both the sources of uncertainty and their impact on model performance.
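To illustrate the general idea behind the entropy-based adaptation of permutation feature importance described above (this is a minimal sketch of the concept, not the paper's exact procedure), the snippet below shuffles each feature column and records how the mean Shannon entropy of a classifier's predictive distribution changes. The helper names, the scikit-learn random forest, and the synthetic dataset are illustrative assumptions.

    import numpy as np
    from scipy.stats import entropy
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier

    def mean_predictive_entropy(model, X):
        # Average Shannon entropy of the predictive distribution p(y | x).
        return entropy(model.predict_proba(X), axis=1).mean()

    def entropy_permutation_importance(model, X, n_repeats=10, seed=0):
        # For each feature, shuffle its column and record the change in
        # mean predictive entropy relative to the unpermuted baseline.
        rng = np.random.default_rng(seed)
        baseline = mean_predictive_entropy(model, X)
        scores = np.empty(X.shape[1])
        for j in range(X.shape[1]):
            deltas = []
            for _ in range(n_repeats):
                X_perm = X.copy()
                X_perm[:, j] = rng.permutation(X_perm[:, j])
                deltas.append(mean_predictive_entropy(model, X_perm) - baseline)
            scores[j] = np.mean(deltas)
        return scores

    # Illustrative usage on synthetic data.
    X, y = make_classification(n_samples=500, n_features=6, n_informative=3,
                               random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X, y)
    print(entropy_permutation_importance(model, X))

Under this reading, a positive score means permuting the feature makes the predictive distribution more uncertain on average, while a negative score suggests the feature itself contributes uncertainty; the paper develops analogous entropy- and log-likelihood-based variants of partial dependence and individual conditional expectation plots.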
Original language: English
Journal: Data Mining and Knowledge Discovery
Early online date: 29 Aug 2024
DOIs
Publication status: Published - 29 Aug 2024

Keywords

  • Entropy
  • Feature importance
  • Predictive uncertainty
  • Variable importance
