Abstract
In this paper we explicate the general concept of ‘an explanation’ and show that, because there are many kinds of explanation, there must be many kinds of ‘explainability’. We then analyse the types of explanation that current explainability methods can give of Artificial Intelligence (AI) systems and their output, and discuss what types of explanation patients are likely to seek as part of the diagnostic process or as part of a choice of therapy. We argue that the types of explanation that are provided by current AI explainability methods do not adequately answer many reasonable requests for explanation that patients can make when their diagnosis or treatment choice has involved the use of AI advice.
| Original language | English |
| --- | --- |
| Pages (from-to) | 8-14 |
| Journal | Journal of Applied Ethics and Philosophy |
| Volume | 16 |
| DOIs | |
| Publication status | Published - 28 Mar 2025 |