Ontologies, with Description Logics (DLs) as their representational underpinning, provide a logic-based data model for knowledge processing, thereby supporting intelligent reasoning over domain knowledge in a wide range of applications, most notably modern biomedical, life-science and text-mining applications. However, with their growing use, not only has the number of available ontologies increased considerably, but the ontologies themselves are also growing in size and becoming more complex to manage. Moreover, capturing domain knowledge in the form of ontologies is labour-intensive work that is expensive from an implementation perspective. There is therefore a strong demand for techniques and automated tools that create restricted views of ontologies while preserving complete information over the restricted signature.

Forgetting is a non-standard reasoning technique that provides such a service: it eliminates concept and role symbols from an ontology in such a way that all logical consequences over the remaining signature are preserved. It has proved very useful in ontology-based knowledge processing, as it allows users to focus on specific parts of (usually very large) ontologies for easy reuse, or to zoom in on (usually very complex) ontologies for in-depth analysis. Other uses of forgetting include information hiding, explanation generation, abduction, ontology debugging and repair, and computing logical differences between ontology versions. Despite this usefulness, forgetting is an inherently difficult problem: it is much harder than standard reasoning (satisfiability testing), very few logics are known to be complete for forgetting, research on the topic remains limited, and few forgetting tools are available.

This thesis investigates practical methods for semantic forgetting in expressive description logics not considered before. In particular, we present a practical method for forgetting concept symbols from ontologies expressible in the description logic ALCOI, i.e., the basic ALC extended with nominals and inverse roles. Based on a generalisation of a monotonicity property called Ackermann's Lemma, the method is the first and only approach to concept forgetting in description logics with nominals. We also present a practical method for forgetting role symbols from ontologies expressible in the description logic ALCOIH, i.e., ALCOI extended with role hierarchies, the universal role and role conjunction. The universal role and role conjunction enrich the target language, making it expressive enough to represent forgetting solutions that would otherwise be lost. Based on a non-trivial generalisation of Ackermann's Lemma, this method is the first and only approach so far to support role forgetting in description logics with nominals.

Both methods are goal-oriented and incremental. They terminate, and they are sound in the sense that the forgetting solutions are equivalent to the original ontologies up to (the interpretations of) the symbols that have been forgotten, possibly together with (the interpretations of) any symbols that have been introduced. Combined, the two methods form a unifying method for forgetting both concept and role symbols from ontologies expressible in the description logic ALCOIH.
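To make the underlying ideas concrete, the following is a minimal, illustrative sketch of the semantic notion of forgetting and of Ackermann's Lemma in its classical second-order form; the notation and symbol names are generic placeholders, not the thesis's exact formulations.

```latex
% Illustrative only; notation and symbol names are generic placeholders,
% not the exact formulations used in the thesis.

% Semantic (model-theoretic) forgetting: an ontology V is a solution of
% forgetting a signature subset \Sigma from an ontology O iff, for every
% interpretation \mathcal{I}:
\mathcal{I} \models \mathcal{V}
  \;\Longleftrightarrow\;
  \text{there is } \mathcal{J} \models \mathcal{O}
  \text{ such that } \mathcal{J} \sim_{\Sigma} \mathcal{I},
% where \mathcal{J} \sim_{\Sigma} \mathcal{I} means the two interpretations
% agree on every symbol except possibly those in \Sigma.

% Ackermann's Lemma (one direction), the monotonicity property that the
% methods generalise: if the predicate variable P does not occur in A and
% occurs only negatively in B(P), then P can be eliminated by substitution:
\exists P \,\bigl[\, \forall \bar{x}\,\bigl(A(\bar{x}) \rightarrow P(\bar{x})\bigr) \wedge B(P) \,\bigr]
  \;\equiv\; B(P := A).
% The dual direction (constraint P \rightarrow A with only positive
% occurrences of P in B) holds analogously.
```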
This unifying method has been implemented in Java using the OWL API. The prototype, called FAME, has been evaluated on a corpus of real-world ontologies to verify its practicality. The results show that FAME was successful (i.e., it eliminated all specified concept and role symbols) in most of the test cases, and in most of these cases the elimination was completed within a very short period of time.
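As a small illustration of how such a prototype is typically driven through the OWL API, here is a hedged Java sketch. The OWL API calls are standard; the `Fame` class, its `forget` method, the IRIs and the file paths are hypothetical placeholders and do not reproduce FAME's actual interface.

```java
import java.io.File;
import java.util.HashSet;
import java.util.Set;

import org.semanticweb.owlapi.apibinding.OWLManager;
import org.semanticweb.owlapi.model.*;

public class ForgettingDemo {
    public static void main(String[] args) throws Exception {
        // Load an ontology with the OWL API (standard calls).
        OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
        OWLDataFactory factory = manager.getOWLDataFactory();
        OWLOntology ontology =
            manager.loadOntologyFromOntologyDocument(new File("pizza.owl")); // hypothetical path

        // Collect the concept and role symbols to be forgotten (IRIs are placeholders).
        Set<OWLEntity> forgettingSignature = new HashSet<>();
        forgettingSignature.add(factory.getOWLClass(
            IRI.create("http://example.org/onto#Margherita")));       // hypothetical concept name
        forgettingSignature.add(factory.getOWLObjectProperty(
            IRI.create("http://example.org/onto#hasTopping")));       // hypothetical role name

        // Hypothetical forgetting interface: a real tool would return an ontology
        // whose logical consequences over the remaining signature coincide with
        // those of the input ontology.
        // OWLOntology view = Fame.forget(ontology, forgettingSignature);
        // manager.saveOntology(view, IRI.create(new File("pizza-view.owl").toURI()));
    }
}
```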
| Date of Award | 1 Aug 2018 |
| --- | --- |
| Original language | English |
| Awarding Institution | The University of Manchester |
| Supervisor | David Rydeheard (Supervisor) & Renate Schmidt (Supervisor) |
Automated Semantic Forgetting for Expressive Description Logics
Zhao, Y. (Author). 1 Aug 2018
Student thesis: PhD