Understanding changes in an ontology has become an active topic of interest for ontology engineers, driven by the increasing need to support and maintain large collaborative ontologies. Ontology support and debugging mechanisms have mainly addressed errors derived from reasoning tasks such as checking concept satisfiability and ontology consistency. Although debugging tools that help users understand entailments have been introduced over the past decade [1, 2], these do not address the desirability and expectation of entailments. Currently, logical faults in ontologies are treated in a vacuum, without considering the information available about the entailment evolution of the ontology as recorded in its versions, the expectation of entailments, and how the ontology and its logical consequences comply with historical changes. In this thesis we present a novel approach for detecting logical warnings that are directly linked to the desirability and expectation of entailments as recorded in the ontology's versions. We first introduce methods for evaluating ontology evolution trends and editing dynamics, and for identifying versions that correspond to areas of major change in the ontology. This lifetime view of the ontology provides background information on its growth and change from an axiom-centric perspective, and on the presence of entailments throughout the studied versions. We then subject the asserted axioms from each version to a cross-functional and systematic analysis of changes, the effectiveness of those changes, and their consistency in subsequent versions.
From this detailed record of axiom changes and their entailment profiles, we derive entailment warnings that indicate or suggest domain modelling bugs in terms of content redundancy, regression, refactoring, and thrashing. We validate and confirm these methods by analysing a ten-year evolution period of the National Cancer Institute Thesaurus (NCIt). We present a detailed entailment report for each of the problematic axioms that contain domain modelling bugs, and provide a clear summary of the versions in which these axioms introduce logical warnings. This detailed report of entailment history, and the detection of domain modelling bugs, requires no in-depth domain knowledge and is derived purely from the publicly available versions of the ontology. It is through this distinctive use of ontology versions that we pioneer the detection of domain modelling bugs as logical warnings based on the evaluation of expected and wanted entailments.
Date of Award: 3 Jan 2016
The University of Manchester
Supervisors: Robert Stevens (Supervisor) & Bijan Parsia (Supervisor)