What does it mean for a clinical AI to be just: conflicts between local fairness and being fit-for-purpose?

Research output: Contribution to journal › Article › peer-review


Abstract

There have been repeated calls to ensure that clinical artificial intelligence (AI) is not discriminatory, that is, that it provides its intended benefit to all members of society irrespective of the protected characteristics of the individuals in whose healthcare the AI might participate. There have also been repeated calls to ensure that any clinical AI is tailored to the local population in which it is used, so that it is fit-for-purpose. Yet these two calls might clash, since tailoring an AI to a local population may reduce its effectiveness when the AI is used in the care of individuals whose characteristics are not represented in that population. Here, I explore the bioethical concept of local fairness as applied to clinical AI. I first introduce the discussion concerning fairness and inequalities in healthcare and how this problem has persisted in attempts to develop AI-enhanced healthcare. I then discuss various technical aspects which might affect the implementation of local fairness. Next, I introduce some rule-of-law considerations to better contextualise the issue by drawing key parallels. I then discuss some potential technical solutions which have been proposed to address the issue of local fairness. Finally, I outline which solutions I consider most likely to contribute to a fit-for-purpose and fair AI.
Original language: English
Journal: Journal of Medical Ethics
Early online date: 29 Feb 2024
DOIs
Publication status: E-pub ahead of print - 29 Feb 2024

