Abstract
Background:
The complexity of many AI models hinders their clinical adoption because the clinicians using them do not regard them as transparent. This study addresses the lack of clinician-centered explainable AI (XAI) interfaces by designing and evaluating intuitive visual explanations for intubation prediction, testing the hypothesis that workflow-compatible designs enhance acceptance.
Objective:
This study compares three time-aware visual explanations for XAI-based intubation prediction and evaluates their acceptance, comprehension, and perceived utility among clinicians.
Methods:
We developed machine learning models to estimate the near-term risk of patient deterioration that may lead to intubation, using ICU time-series data. We generated global and local explanations using SHAP and designed three customized visual formats: a temporal force plot, a temporal bar chart, and a dual-encoded SHAP heatmap. Clinicians (n = 206) evaluated comprehension and usability through objective questions and a Likert-based survey.
Results:
Using data from 4608 critically ill patients, each with 10 medical variables recorded over 7 hours, the Random Forest (RF) model achieved the highest area under the curve (AUC) of 0.94. The local explanations were then customized and evaluated by 206 clinicians through a survey conducted on the Prolific platform. The customized heatmap was rated as the visualization with the highest perceived clinical utility and the closest alignment with clinical workflows.
Discussion:
These findings indicate that explanation formats should be tailored to clinical reasoning and task context, consistent with the concept of cognitive fit. The heatmap's close alignment with clinicians' mental models, together with its graphical integrity, enhances interpretability and trust. The study demonstrates that explanation effectiveness depends on contextual relevance rather than a universal standard, and that the presentation format itself significantly shapes clinicians' trust in XAI systems.
Conclusion:
This study advances clinical XAI by introducing a time-aware explanation framework for ICU intubation decisions. By integrating temporal trends with model reasoning, our visualizations closely align with clinicians’ cognitive workflows. Rigorous clinician-centered evaluation identified the dual-encoded SHAP heatmap as the most useful and workflow-compatible visualization, highlighting the importance of explanation design alongside predictive accuracy for clinical adoption.
| Original language | English |
|---|---|
| Article number | 106287 |
| Journal | International journal of medical informatics |
| Volume | 210 |
| Early online date | 18 Jan 2026 |
| DOIs | |
| Publication status | Published - 15 Apr 2026 |
Title: Clinician preferences for explainable AI in critical care: a comparative study of interpretable models and visualizations for intubation decision support