Abstract
Autonomous robotic systems are being proposed for use in hazardous environments, often to reduce the risks to human workers. In the immediate future, it is likely that human workers will continue to use and direct these autonomous robots, much like other computerised tools but with more sophisticated decision-making. Therefore, one important area on which to focus engineering effort is ensuring that these users trust the system. Recent literature suggests that explainability is closely related to how trustworthy a system is. Like safety and security properties, explainability should be designed into a system, instead of being added afterwards. This paper presents an abstract architecture that supports an autonomous system explaining its behaviour (explainable autonomy), providing a design template for implementing explainable autonomous systems. We present a worked example of how our architecture could be applied in the civil nuclear industry, where both workers and regulators need to trust the system’s decision-making capabilities.
Original language | English
---|---
Title of host publication | IEEE 30th International Requirements Engineering Conference Workshops (REW)
Subtitle of host publication | 2nd International Workshop on Requirements Engineering for Explainable Systems
Publisher | IEEE
Pages | 108-113
Number of pages | 6
DOIs |
Publication status | Published - 15 Aug 2022