The Limits of Explainability in Health AI - Why Current Concepts of AI Explainability Cannot Accommodate Patient Interests

Thomas Ploug, Søren Holm*

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

In this paper we explicate the general concept of ‘an explanation’ and show that, because there are many kinds of explanation, there must be many kinds of ‘explainability’. Subsequently, we analyse the types of explanation that can be given of Artificial Intelligence (AI) systems and their output using current explainability methods, and then discuss what types of explanation patients are likely to seek as part of the diagnostic process or as part of the choice of therapy. We argue that the types of explanation provided by current AI explainability methods do not adequately answer many reasonable requests for explanation that patients can make when their diagnosis or treatment choice has involved the use of AI advice.
Original language: English
Pages (from-to): 8-14
Journal: Journal of Applied Ethics and Philosophy
Volume: 16
DOIs
Publication status: Published - 28 Mar 2025
