Dialogue Explanations for Rule-Based AI Systems

Research output: Chapter in Book/Conference proceeding › Conference contribution › peer-review


Abstract

The need for AI systems to explain themselves is increasingly recognised as a priority, particularly in domains where incorrect decisions can result in harm and, in the worst cases, death. Explainable Artificial Intelligence (XAI) seeks to produce human-understandable explanations for AI decisions. However, most XAI systems prioritise technical complexity and research-oriented goals over end-user needs, risking information overload. This research attempts to bridge that gap by helping users comprehend a rule-based system’s reasoning through dialogue. The hypothesis is that dialogue is an effective mechanism for constructing explanations. A dialogue framework for rule-based AI systems is presented, allowing the system to explain its decisions by engaging in “Why?” and “Why not?” questions and answers. We establish formal properties of this framework and present a small user study, with encouraging results, comparing dialogue-based explanations with proof trees produced by the AI system.
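
As a rough illustration of the idea (not the paper’s formal framework), the sketch below shows how a toy rule engine might answer such questions: “Why?” by citing a rule whose premises all hold, and “Why not?” by pointing at the premises that fail. The rules, facts, and function names here are hypothetical.

    # Illustrative sketch only: a toy backward-chaining rule engine with
    # "Why?" / "Why not?" dialogue moves. All rules, facts, and function
    # names are hypothetical, not taken from the paper.
    RULES = [
        ("grant_loan", ["good_credit", "stable_income"]),
        ("good_credit", ["no_defaults"]),
    ]
    FACTS = {"no_defaults", "stable_income"}

    def holds(goal):
        """Backward chaining: a goal holds if it is a fact or a rule proves it."""
        if goal in FACTS:
            return True
        return any(all(holds(p) for p in premises)
                   for concl, premises in RULES if concl == goal)

    def why(goal):
        """Answer 'Why goal?' by citing a rule whose premises all hold."""
        if goal in FACTS:
            return f"'{goal}' is a given fact."
        for concl, premises in RULES:
            if concl == goal and all(holds(p) for p in premises):
                return f"'{goal}' holds because {premises} all hold."
        return f"'{goal}' does not hold; try why_not('{goal}')."

    def why_not(goal):
        """Answer 'Why not goal?' by pointing at the failing premises."""
        if holds(goal):
            return f"'{goal}' actually holds; try why('{goal}')."
        reasons = [f"the rule for '{concl}' fails on "
                   f"{[p for p in premises if not holds(p)]}"
                   for concl, premises in RULES if concl == goal]
        return "; ".join(reasons) or f"no rule concludes '{goal}'."

    print(why("grant_loan"))   # cites the rule and its satisfied premises
    print(why_not("rich"))     # reports that no rule concludes 'rich'

A user can alternate such questions to walk through the system’s reasoning step by step, which is the kind of interaction the paper compares against reading a full proof tree.
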
Original language: English
Title of host publication: 5th International Workshop on EXplainable and TRAnsparent AI and Multi-Agent Systems (EXTRAAMAS 2023)
Publisher: Springer Nature
Pages: 59-77
Publication status: Published - 5 Sept 2023

Publication series

Name: Explainable and Transparent AI and Multi-Agent Systems
Volume: 14127
ISSN (Print): 0302-9743
ISSN (Electronic): 1611-3349
