Abstract
The recent focus on explainable artificial intelligence has been driven by a perception that complex statistical models are opaque to users. Rule-based systems, in contrast, have often been presented as self-explanatory: the system need only provide a log of its reasoning process and its operations are clear. We believe that such logs are often difficult for users to understand, in part because of their size and complexity. We propose dialogue as an explanatory mechanism for rule-based AI systems, allowing users and systems to co-create an explanation that focuses on the user’s particular interests or concerns. Our hypothesis is that when a system makes a deduction that was, in some way, unexpected by the user, the source of the disagreement or misunderstanding is best located through a collaborative dialogue process that allows the participants to gradually isolate its cause. We have implemented a system with this mechanism and performed a user evaluation showing that in many cases a dialogue is preferred to a reasoning log presented as a tree. These results provide further support for the hypothesis that dialogue could serve as a good explanation mechanism for a rule-based AI system.
| Original language | English |
| --- | --- |
| Title of host publication | 3rd Workshop on Explainable Logic-Based Knowledge Representation |
| Publication status | Accepted/In press - 14 Jun 2022 |