Clinical Negligence in an Age of Machine Learning: Res Ipsa Loquitur to the Rescue?

Research output: Contribution to journal › Article › peer-review

Abstract

Advanced artificial intelligence techniques such as ‘deep learning’ hold promise in healthcare but introduce novel legal problems. Complex machine learning algorithms are intrinsically opaque, and the autonomous nature of these systems can produce unexpected harms, leaving open questions about responsibility for error at the clinician/AI interface. This raises concerns for compensation systems based in negligence, because claimants must establish that a duty of care exists and demonstrate the specific fault that caused the harm.

This paper argues that clinicians should not ordinarily be negligent for following AI recommendations, and that developers are unlikely to owe a duty of care to patients; the healthcare provider is the most likely duty holder for AI systems. There are practical and conceptual problems with comparing AI errors to human performance, or to the performance of other AI systems, in order to determine negligence. This could leave claimants facing insurmountable technical and legal challenges to obtaining compensation. Res ipsa loquitur could solve these problems by allowing courts to draw an inference of negligence when unexpected harm occurs that would not ordinarily happen without negligence. This doctrine is potentially well suited to addressing the challenges of AI systems. However, I argue that res ipsa is primarily an instrument of judicial discretion, which may perpetuate legal uncertainty and still leave some claimants without a remedy.
Original language: English
Journal: Journal of European Tort Law
Publication status: Accepted/In press - 2024

Keywords

  • Artificial intelligence (AI)
  • Negligence
  • Res Ipsa Loquitur
