Computing Spaces of Independent Explanations via Abductive Reasoning

Student thesis: PhD

Abstract

This thesis investigates methods for abductive reasoning in large knowledge bases. Abduction refers to the process of explaining new observations using prior knowledge, which enables tasks that require the generation of new hypotheses, including scientific discovery, belief expansion, diagnostics, language interpretation and inductive learning. This thesis focuses on knowledge represented using Description Logics (DLs), which are commonly used to model information in domains such as bioinformatics, healthcare, robotics and natural language processing. A variety of research has been conducted on abduction in DLs, though it remains a hard problem. In this work, the aim is to produce hypotheses that take the form of a set of explanations and are semantically minimal. Producing a set of explanations, rather than a single one, makes it possible to examine multiple avenues of explaining the observation, offering further insight into both the observation and the available knowledge. Semantic minimality limits hypotheses to those that assume no more than is necessary to explain the observation given the existing knowledge. For the general application of abduction, it is natural to first seek explanations that are likely, while limiting the strength of initial assumptions until further evidence is available. This provides a useful mechanism for ordering hypotheses: the least assumptive (weakest) ones are sought first and can then be refined through further investigation. However, semantic minimality is problematic in the presence of disjunction, since it permits any number of redundant explanations (disjuncts). For this reason, disjunction has previously been excluded from solutions when considering semantic minimality. In this work, the problem must be addressed, as the hypothesis takes the form of a set of possible explanations represented as a disjunction. A new DL abduction problem is therefore defined.
The proposed problem introduces a notion of independence, under which explanations must not contradict existing knowledge and must not express information contained within the other explanations. This is the first problem to consider both semantic minimality and independence of explanations together in the DL setting. The issue of permitting language extensions in the hypotheses is also discussed and motivated, as this makes the abduction problem considered here significantly different from prior work; one example is the use of disjunctions of DL axioms to represent hypotheses. To solve the problem, novel methods that exploit the connection between forgetting and abduction are proposed. Further investigation of this connection in the DL setting is undertaken, including an analysis of the characteristics of forgetting solutions in relation to the proposed problem and the development of efficient methods for eliminating redundant explanations. Extensions to existing forgetting tools required for expressive abduction are also proposed and implemented. The forgetting-based abduction approaches developed are evaluated over corpora containing real ontologies. The results indicate that, in the majority of cases, the approaches can efficiently compute spaces of explanations over ALC knowledge bases with tens of thousands of axioms. The use of the disjunctive hypotheses produced by forgetting-based abduction is also explored with respect to the problems of hypothesis refinement, induction and concept learning in ontologies, with suggestions on how the characteristics of these hypotheses may be utilised in practice.
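To make the notions of disjunctive hypotheses and independence concrete, the following is a small illustrative sketch, not taken from the thesis itself; the concept names (FluVirus, Bacterium, Infection) and the individual p are assumed purely for illustration.

```latex
% Assumed toy background knowledge (TBox) and observation (ABox):
K = \{\, \mathit{FluVirus} \sqsubseteq \mathit{Infection},\;
        \mathit{Bacterium} \sqsubseteq \mathit{Infection} \,\}
\qquad
\mathcal{O} = \{\, \mathit{Infection}(p) \,\}

% A space of explanations, represented as a disjunctive hypothesis:
\mathcal{H} = \mathit{FluVirus}(p) \,\vee\, \mathit{Bacterium}(p)

% H is weaker (less assumptive) than either disjunct alone, so a
% semantically minimal hypothesis prefers the disjunction.  Without an
% independence condition, however, redundant disjuncts could be added
% freely: e.g. a disjunct that contradicts K, or one already entailed
% by K together with another disjunct.
```

Under this reading, independence rules out exactly the redundant disjuncts, so each explanation in the space contributes genuinely new information.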
Date of Award: 1 Aug 2022
Original language: English
Awarding Institution
  • The University of Manchester
Supervisor: Renate Schmidt

Keywords

  • Abductive Reasoning; Ontologies; Artificial Intelligence; Knowledge Representation and Reasoning; Hypothesis Generation
