Interpretable artificial intelligence systems in medical imaging: Review and theoretical framework

Research output: Chapter in Book/Conference proceeding › Chapter › peer-review

Abstract

The development of Interpretable Artificial Intelligence (AI) has drawn substantial attention to the effect of AI on augmenting human decision-making. In this paper, we review the literature on medical imaging to develop a framework of Interpretable AI systems that enable the diagnostic process. We identify three components that constitute Interpretable AI systems, namely human agents, data, and machine learning (ML) models, and discuss their classifications and dimensions. Using the workflow of AI-augmented breast screening in the UK as an example, we identify the tensions that may emerge as human agents work with ML models and data. We discuss how these tensions may affect the performance of Interpretable AI systems in the diagnostic process and conclude with implications for further research.
Original language: English
Title of host publication: Research Handbook on Artificial Intelligence and Decision Making in Organizations
Editors: Ioanna Constantiou, Mayur P. Joshi, Marta Stelmaszak
Place of Publication: Cheltenham
Publisher: Edward Elgar
Chapter: 14
Pages: 240-265
Number of pages: 26
ISBN (Electronic): 9781803926216
ISBN (Print): 9781803926209
DOIs
Publication status: Published - 19 Mar 2024

Publication series

Name: Research Handbooks in Business and Management series
Publisher: Edward Elgar

Keywords

  • interpretable AI
  • explainable AI
  • AI-augmented decision making
  • tensions
  • medical imaging
  • medical diagnosis
