Verifiable Machine Ethics in Changing Contexts

Louise Dennis, Martin Mose Bentzen, Felix Lindner, Michael Fisher

Research output: Chapter in Book/Report/Conference proceeding, Conference contribution, peer-reviewed


Many systems proposed for the implementation of ethical reasoning involve an encoding of user values as a set of rules or a model. We consider the question of how changes of context affect these encodings. We propose the use of a reasoning cycle, in which information about the ethical reasoner's context is imported in a logical form, and we propose that context-specific aspects of an ethical encoding be prefaced by a guard formula. This guard formula should evaluate to true when the reasoner is in the appropriate context, and the relevant parts of the reasoner's rule set or model should be updated accordingly. This architecture allows techniques for the model checking of agent-based autonomous systems to be used to verify that all contexts respect key stakeholder values. We implement this framework using the Hybrid Ethical Reasoning Agents system (HERA) and the Model Checking Agent Programming Languages (MCAPL) framework.
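The guard-formula idea from the abstract can be illustrated with a minimal sketch. This is not the authors' HERA or MCAPL code; the rule names, context facts, and the `GuardedRule`/`active_rules` helpers are hypothetical, chosen only to show how a reasoning cycle might re-evaluate guards against freshly imported context information and update the active rule set.

```python
# Illustrative sketch only (not HERA/MCAPL): context-specific parts of an
# ethical encoding are prefaced by a guard formula; a rule is active exactly
# when its guard evaluates to true in the current context.
from dataclasses import dataclass
from typing import Callable, Dict, List

# The context: logical facts imported at the start of each reasoning cycle.
Context = Dict[str, bool]

@dataclass
class GuardedRule:
    name: str
    guard: Callable[[Context], bool]  # guard formula over the context
    rule: str                         # the encoded ethical rule/constraint

def active_rules(rules: List[GuardedRule], ctx: Context) -> List[str]:
    """One step of the reasoning cycle: re-evaluate every guard against the
    imported context and return the rules currently in force."""
    return [r.rule for r in rules if r.guard(ctx)]

# Hypothetical encoding: one context-independent rule, one guarded rule.
rules = [
    GuardedRule("always", lambda c: True, "do_no_harm"),
    GuardedRule("hospital", lambda c: c.get("in_hospital", False),
                "respect_patient_confidentiality"),
]

print(active_rules(rules, {"in_hospital": True}))
print(active_rules(rules, {"in_hospital": False}))
```

Because guard evaluation is a pure function of the logical context, a model checker can enumerate the reachable contexts and verify that in every one of them the active rule set respects the key stakeholder values, which is the property the paper targets.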
Original language: English
Title of host publication: 35th AAAI Conference on Artificial Intelligence
Publication status: Accepted/In press - 2 Dec 2020
Event: 35th AAAI Conference on Artificial Intelligence
Duration: 2 Feb 2021 - 9 Feb 2021



