Human-AI joint task performance: Learning from uncertainty in autonomous driving systems

Panos Constantinides, Eric Monteiro, Lars Mathiassen

Research output: Contribution to journal › Article › peer-review



High uncertainty tasks such as making a medical diagnosis, judging a criminal justice case, and driving in a big city have a very low margin for error because of the potentially devastating consequences for human lives. In this paper, we focus on how humans learn from uncertainty while performing a high uncertainty task with AI systems. We analyze Tesla's autonomous driving systems (ADS), a type of AI system, drawing on crash investigation reports, published reports on formal simulation tests, and YouTube recordings of informal simulation tests by amateur drivers. Our empirical analysis provides insights into how varied levels of uncertainty tolerance have implications for how humans learn from uncertainty in real-time and over time to jointly perform the driving task with Tesla's ADS. Our core contribution is a theoretical model that explains human-AI joint task performance. Specifically, we show that the interdependencies between different modes of AI use, including uncontrolled automation, limited automation, expanded automation, and controlled automation, are dynamically shaped through humans' learning from uncertainty. We discuss how humans move between these modes of AI use by increasing, reducing, or reinforcing their uncertainty tolerance. We conclude by discussing implications for the design of AI systems, policy on delegation in joint task performance, and the use of data to improve learning from uncertainty.
Original language: English
Article number: 100502
Journal: Information and Organization
Issue number: 2
Early online date: 30 Jan 2024
Publication status: Published - 1 Jun 2024


