Should AI Systems in Nuclear Facilities Explain Decisions the Way Humans Do? An Interview Study

Research output: Chapter in Book/Conference proceeding › Conference contribution › peer-review


Abstract

There is growing interest in the use of robotics and AI in the nuclear industry; however, it is important to ensure these systems are ethically grounded, trustworthy and safe. An emerging technique to address these concerns is explainability. In this paper we present the results of an interview study with nuclear industry experts to explore the use of explainable intelligent systems within the field. We interviewed 16 participants with varying backgrounds of expertise and presented two potential use cases for evaluation: a navigation scenario and a task scheduling scenario. Through an inductive thematic analysis we identified the aspects of a deployment that experts want explainable systems to communicate, and we outline how these map onto the folk conceptual theory of explanation, a framework describing how people explain behaviours. We established that, in nuclear deployments, an intelligent system should explain its reasons for an action, its expectations of itself, changes in the environment that impact decision making, probabilities and the elements within them, safety implications and mitigation strategies, and robot health and component failures during decision making. We determine that these factors could be explained with cause, reason, and enabling factor explanations.
Original language: English
Title of host publication: 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN 2022)
Publisher: IEEE
ISBN (Electronic): 978-1-6654-0680-2
DOIs
Publication status: Published - 30 Sept 2022
