On Proactive, Transparent, and Verifiable Ethical Reasoning for Robots

Paul Bremner, Louise A. Dennis, Michael Fisher, Alan F. Winfield

Research output: Contribution to journal › Article › peer-review

Abstract

Previous work on ethical machine reasoning has largely been theoretical, and where such systems have been implemented, they have, in general, been only initial proofs of principle. Here, we address the question of desirable attributes for such systems to improve their real-world utility, and how controllers with these attributes might be implemented. We propose that ethically critical machine reasoning should be proactive, transparent, and verifiable. We describe an architecture in which the ethical reasoning is handled by a separate layer that augments a typical layered control architecture and ethically moderates the robot's actions. It makes use of a simulation-based internal model and supports proactive, transparent, and verifiable ethical reasoning. To do so, the reasoning component of the ethical layer uses our Python-based belief-desire-intention (BDI) implementation. The declarative logic structure of BDI facilitates both transparency, through logging of the reasoning cycle, and formal verification methods. To prove the principles of our approach, we use a case-study implementation to demonstrate its operation experimentally. Importantly, it is the first such robot controller in which the ethical machine reasoning has been formally verified.
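The abstract notes that the declarative structure of a BDI reasoning cycle supports transparency via logging: each belief update, plan selection, and action can be recorded with the reason it occurred. As a rough illustration of what such a cycle can look like, here is a minimal, self-contained Python sketch; the class, the guard-based plan structure, and the safety goal are hypothetical examples for illustration, not the paper's actual implementation.

```python
# Minimal, hypothetical sketch of a BDI-style reasoning cycle with
# transparent logging; names and plan structure are illustrative only.
import logging

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
log = logging.getLogger("ethical_layer")


class BDIAgent:
    def __init__(self, plans):
        self.beliefs = set()      # current facts about the world
        self.desires = []         # goals the agent wants to achieve
        self.intentions = []      # plans the agent has committed to
        self.plans = plans        # goal -> list of (guard, actions)

    def perceive(self, percepts):
        """Update beliefs from (simulated) sensor input and log the change."""
        self.beliefs |= set(percepts)
        log.info("beliefs updated: %s", sorted(self.beliefs))

    def deliberate(self):
        """Commit to the first applicable plan per goal, logging why."""
        for goal in self.desires:
            for guard, actions in self.plans.get(goal, []):
                if guard <= self.beliefs:  # every guard condition is believed
                    log.info("goal %r: plan %s applicable (guard %s holds)",
                             goal, actions, sorted(guard))
                    self.intentions.append((goal, actions))
                    break
            else:
                log.info("goal %r: no applicable plan", goal)
        self.desires.clear()

    def act(self, execute):
        """Execute committed intentions, logging each action taken."""
        while self.intentions:
            goal, actions = self.intentions.pop(0)
            for action in actions:
                log.info("goal %r: executing %r", goal, action)
                execute(action)


# Example: a hypothetical safety goal ethically moderating robot behaviour.
plans = {
    "keep_human_safe": [
        ({"human_near_hazard"}, ["warn_human", "move_between"]),
        (set(), ["continue_task"]),  # default plan when no hazard believed
    ],
}
agent = BDIAgent(plans)
agent.desires.append("keep_human_safe")
agent.perceive({"human_near_hazard"})
agent.deliberate()
agent.act(lambda action: None)  # stub actuator; a real robot would act here
```

Because plan selection is an explicit guard check over the belief set, every decision the agent makes leaves a human-readable trace in the log, which is the transparency property the abstract emphasises; the same declarative structure is what makes such reasoning amenable to formal verification.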
Original language: English
Pages (from-to): 541-561
Journal: Proceedings of the IEEE
Volume: 107
Issue number: 3
Early online date: 21 Feb 2019
DOIs
Publication status: Published - Mar 2019
