TY - JOUR
T1 - Assessing the communication gap between AI models and healthcare professionals
T2 - Explainability, utility and trust in AI-driven clinical decision-making
AU - Wysocki, Oskar
AU - Davies, Jessica Katharine
AU - Vigo, Markel
AU - Armstrong, Anne Caroline
AU - Landers, Dónal
AU - Lee, Rebecca
AU - Freitas, André
N1 - Funding Information:
This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 965397. Funding for developing the CORONET online tool has been provided by The Christie Charitable Fund (1049751). Dr Rebecca Lee is supported by the National Institute for Health Research.
Publisher Copyright:
© 2022 The Author(s)
PY - 2023/3/1
Y1 - 2023/3/1
N2 - This paper contributes a pragmatic evaluation framework for explainable Machine Learning (ML) models for clinical decision support. The study revealed a more nuanced role for ML explanation models when these are pragmatically embedded in the clinical context. Despite the generally positive attitude of healthcare professionals (HCPs) towards explanations as a safety and trust mechanism, a significant set of participants experienced negative effects associated with confirmation bias, accentuated model over-reliance and increased effort to interact with the model. Also, contradicting one of their main intended functions, standard explanatory models showed limited ability to support a critical understanding of the model's limitations. However, we found new significant positive effects which reposition the role of explanations within a clinical context: these include reduction of automation bias, support in addressing ambiguous clinical cases (cases where HCPs were not certain about their decision) and support of less experienced HCPs in the acquisition of new domain knowledge.
AB - This paper contributes a pragmatic evaluation framework for explainable Machine Learning (ML) models for clinical decision support. The study revealed a more nuanced role for ML explanation models when these are pragmatically embedded in the clinical context. Despite the generally positive attitude of healthcare professionals (HCPs) towards explanations as a safety and trust mechanism, a significant set of participants experienced negative effects associated with confirmation bias, accentuated model over-reliance and increased effort to interact with the model. Also, contradicting one of their main intended functions, standard explanatory models showed limited ability to support a critical understanding of the model's limitations. However, we found new significant positive effects which reposition the role of explanations within a clinical context: these include reduction of automation bias, support in addressing ambiguous clinical cases (cases where HCPs were not certain about their decision) and support of less experienced HCPs in the acquisition of new domain knowledge.
KW - Automation bias
KW - Clinical decision support
KW - Confirmation bias
KW - Explainable AI
KW - Explainable model
KW - Explanation's impact
KW - ML in healthcare
KW - User study
UR - http://www.scopus.com/inward/record.url?scp=85147088416&partnerID=8YFLogxK
UR - https://www.mendeley.com/catalogue/be703f1f-a2a8-3704-98e2-05b81b074133/
U2 - 10.1016/j.artint.2022.103839
DO - 10.1016/j.artint.2022.103839
M3 - Article
AN - SCOPUS:85147088416
SN - 0004-3702
VL - 316
JO - Artificial Intelligence
JF - Artificial Intelligence
M1 - 103839
ER -