Eliciting Explainability Requirements for Safety-Critical Systems: A Nuclear Case Study

Research output: Chapter in Book/Conference proceeding · Conference contribution · peer-review

Abstract

[Context & Motivation] Explainable autonomous systems are increasingly essential for engendering trust, especially when they are deployed in safety-critical scenarios.

[Question/Problem] Despite the strong reliability demands of critical settings, a gap remains between Explainable AI and Requirements Engineering (RE), raising two questions: can current RE techniques sufficiently elicit explainability requirements, and what characteristics do these requirements have?

[Principal Ideas/Results] We examine whether established RE techniques can be used to elicit explainability requirements and analyse the characteristics of such requirements. We answer these questions in the context of a nuclear robotics case study focused on navigation and task-scheduling missions.

[Contribution] We contribute: (1) an experience report on eliciting explainability requirements, (2) categories of explainability requirements for autonomous robotic systems, and (3) practical guidance for applying our approach in other safety-critical domains.
Original language: English
Title of host publication: Requirements Engineering: Foundation for Software Quality (REFSQ) 2025
Publication status: Accepted/In press - 15 Jan 2025
