An Abstract Architecture for Explainable Autonomy in Hazardous Environments

Matt Luckcuck, Hazel Taylor, Marie Farrell

Research output: Chapter in Book/Conference proceeding › Conference contribution › peer-review


Abstract

Autonomous robotic systems are being proposed for use in hazardous environments, often to reduce the risks to human workers. In the immediate future, it is likely that human workers will continue to use and direct these autonomous robots, much like other computerised tools but with more sophisticated decision-making. Therefore, one important area on which to focus engineering effort is ensuring that these users trust the system. Recent literature suggests that explainability is closely related to how trustworthy a system is. Like safety and security properties, explainability should be designed into a system, instead of being added afterwards. This paper presents an abstract architecture that supports an autonomous system explaining its behaviour (explainable autonomy), providing a design template for implementing explainable autonomous systems. We present a worked example of how our architecture could be applied in the civil nuclear industry, where both workers and regulators need to trust the system's decision-making capabilities.
Original language: English
Title of host publication: IEEE 30th International Requirements Engineering Conference Workshops (REW)
Subtitle of host publication: 2nd International Workshop on Requirements Engineering for Explainable Systems
Publisher: IEEE
Pages: 108-113
Number of pages: 6
Publication status: Published - 15 Aug 2022
