Personalized uncertainty quantification in artificial intelligence

Tapabrata Chakraborti, Christopher R. S. Banerji, Ariane Marandon, Vicky Hellon, Robin Mitra, Brieuc Lehmann, Leandra Brauninger, Sarah McGough, Cagatay Turkay, Alejandro F. Frangi, Ginestra Bianconi, Weizi Li, Owen Rackham, Deepak Parashar, Chris Harbron, Ben MacArthur

Research output: Contribution to journal › Review article › peer-review

Abstract

Artificial intelligence (AI) tools are increasingly being used to help make consequential decisions about individuals. While AI models may be accurate on average, they can simultaneously be highly uncertain about outcomes associated with specific individuals or groups of individuals. For high-stakes applications (such as healthcare and medicine, defence and security, banking and finance), AI decision-support systems must be able to make personalized assessments of uncertainty in a rigorous manner. However, the statistical frameworks needed to do so are currently incomplete. Here, we outline current approaches to personalized uncertainty quantification (PUQ) and define a set of grand challenges associated with the development and use of PUQ in a range of areas, including multimodal AI, explainable AI, generative AI and AI fairness.
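
For context only, the sketch below illustrates one widely used route to per-individual uncertainty of the kind the abstract refers to: locally adaptive split conformal prediction, in which prediction-interval widths adapt to how difficult each individual case is to predict. It is an illustrative assumption-laden example, not the specific approaches surveyed in the article; all data, models and parameter choices are synthetic.

```python
# Minimal sketch of personalized uncertainty via locally adaptive split
# conformal prediction. All names and data here are synthetic assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

# Synthetic data with heteroscedastic noise: some individuals are inherently
# harder to predict, so their intervals should be wider.
n = 3000
X = rng.uniform(-3, 3, size=(n, 1))
noise_scale = 0.2 + np.abs(X[:, 0])           # noise grows with |x|
y = np.sin(X[:, 0]) + rng.normal(0, noise_scale)

# Split into a proper training set and a calibration set.
X_train, y_train = X[:2000], y[:2000]
X_cal, y_cal = X[2000:], y[2000:]

# Fit a point predictor and a model of the absolute residual (local difficulty).
mean_model = GradientBoostingRegressor().fit(X_train, y_train)
resid_model = GradientBoostingRegressor().fit(
    X_train, np.abs(y_train - mean_model.predict(X_train))
)

# Normalised nonconformity scores on the calibration set.
sigma_cal = np.maximum(resid_model.predict(X_cal), 1e-6)
scores = np.abs(y_cal - mean_model.predict(X_cal)) / sigma_cal

# Conformal quantile giving ~90% marginal coverage.
alpha = 0.1
q = np.quantile(scores, np.ceil((len(scores) + 1) * (1 - alpha)) / len(scores))

# Personalized prediction intervals: the width varies with the individual.
X_new = np.array([[0.1], [2.9]])
mu = mean_model.predict(X_new)
half_width = q * np.maximum(resid_model.predict(X_new), 1e-6)
for x, m, h in zip(X_new[:, 0], mu, half_width):
    print(f"x={x:+.1f}: interval [{m - h:.2f}, {m + h:.2f}] (width {2 * h:.2f})")
```

The point of the example is the contrast the abstract draws: the model can be accurate on average, yet the calibrated interval for an individual near x = 2.9 is much wider than for one near x = 0.1, making the uncertainty assessment specific to that individual.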
Original language: English
Pages (from-to): 522-530
Number of pages: 9
Journal: Nature Machine Intelligence
Volume: 7
Issue number: 4
DOIs
Publication status: Published - 23 Apr 2025

