TY - JOUR
T1 - Ethics framework for predictive clinical AI model updating
AU - Pruski, Michal
N1 - Funding Information:
I would like to thank Matthew Sperrin and Robert Palmer for helpful advice on this manuscript, and Nathan Proudlove for the opportunity to explore this topic during my studies and for comments on the assignment on which this submission is based. I also thank David A. Jenkins for pointing me to some key papers, and the anonymous reviewers for their constructive feedback during the review process.
Publisher Copyright:
© 2023, The Author(s), under exclusive licence to Springer Nature B.V.
PY - 2023/9/8
Y1 - 2023/9/8
AB - Updating predictive clinical artificial intelligence (AI) models, which should be part of the departmental quality improvement process, presents an ethical dilemma. One must decide whether withdrawing the AI model is necessary to obtain the relevant information from a naive patient population, or whether causal inference techniques can be used to obtain this information instead. Withdrawing an AI model from patient care might pose challenges if the model is considered standard of care, while causal inference will not be reliable if the relevant statistical assumptions do not hold. Each of these two updating strategies therefore carries risks for current patients, yet a lack of reliable data might endanger future patients, and not withdrawing an outdated AI might likewise expose patients to risk. Here I propose a high-level ethical framework, epistemic risk management, that provides guidance on which route of model updating should be taken, based on the likelihood that the assumptions used in creating the original AI model, and those required for causal inference, hold true. This approach balances our uncertainty about the AI's status as standard of care against the risk of not obtaining the necessary data, so as to increase the probability of benefiting both current and future patients for whose care the AI is used.
KW - Artificial intelligence
KW - Causal inference
KW - Ethics
KW - Healthcare
KW - Quality improvement
UR - http://www.scopus.com/inward/record.url?scp=85170271948&partnerID=8YFLogxK
UR - https://www.mendeley.com/catalogue/70cd4210-80a0-3ebc-932e-e7e36f7daea5/
DO - 10.1007/s10676-023-09721-x
M3 - Article
AN - SCOPUS:85170271948
SN - 1388-1957
VL - 25
JO - Ethics and Information Technology
JF - Ethics and Information Technology
IS - 3
M1 - 48
ER -