Ethics framework for predictive clinical AI model updating

Research output: Contribution to journal › Article › peer-review

Abstract

An ethical dilemma arises when considering whether to update a predictive clinical artificial intelligence (AI) model, a decision that should be part of the departmental quality improvement process. One must decide whether withdrawing the AI model is necessary to obtain the relevant information from an AI-naive patient population, or whether causal inference techniques can be used to obtain this information instead. Withdrawing an AI model from patient care may pose challenges if the model is considered standard of care, while causal inference will not be reliable if the relevant statistical assumptions do not hold. Each of these two updating strategies therefore carries risks for patients, yet a lack of reliable data might endanger future patients, and keeping an outdated AI model in use might likewise expose patients to risk. Here I propose a high-level ethical framework, epistemic risk management, that provides guidance on which route of model updating to take, based on how likely it is that the assumptions made when creating the original AI model, and the assumptions required for causal inference, hold true. This approach balances our uncertainty about the status of the AI as standard of care against the risk of failing to obtain the necessary data, so as to increase the probability of benefiting both current and future patients for whose care the AI is used.
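
To make the causal-inference route concrete, below is a minimal sketch of one such technique, inverse probability weighting, applied to entirely hypothetical data (the paper does not prescribe a specific method, and all variable names here are illustrative). It estimates what outcomes would have looked like without AI-guided care while the model stays deployed, and flags the positivity problem that arises once the AI approaches universal standard of care:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Hypothetical observational data: X holds patient covariates, a records
    # whether care was AI-guided (1) or not (0), y is a binary outcome.
    rng = np.random.default_rng(0)
    n = 5000
    X = rng.normal(size=(n, 3))

    def sigmoid(z):
        return 1 / (1 + np.exp(-z))

    a = rng.binomial(1, sigmoid(X[:, 0]))            # AI use depends on a covariate
    y = rng.binomial(1, sigmoid(0.5 * a + X[:, 0]))  # ...which also confounds the outcome

    # Model the propensity of receiving AI-guided care given covariates.
    ps = LogisticRegression().fit(X, a).predict_proba(X)[:, 1]

    # Positivity check: once the AI is near-universal standard of care, almost
    # no patient has a propensity score near 0, the weights below explode, and
    # the estimate becomes unreliable (the reliability caveat the abstract raises).
    if ps.max() > 0.99:
        print("warning: positivity assumption is questionable")

    # Inverse-probability-weighted (Hajek) estimate of the mean outcome had no
    # patient received AI-guided care, i.e. the AI-naive population outcome,
    # obtained without withdrawing the model.
    w = (1 - a) / (1 - ps)
    mu0 = np.sum(w * y) / np.sum(w)
    print(f"estimated mean outcome without AI-guided care: {mu0:.3f}")

On this simulated data the weighted estimate recovers the no-AI outcome that a naive comparison of exposed and unexposed patients would get wrong, but only because exchangeability and positivity hold by construction; the framework's point is precisely that the plausibility of these assumptions must be judged before trusting such an estimate.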

Original language: English
Article number: 48
Journal: Ethics and Information Technology
Volume: 25
Issue number: 3
DOIs
Publication status: Published - 8 Sept 2023

Keywords

  • Artificial intelligence
  • Causal inference
  • Ethics
  • Healthcare
  • Quality improvement
