A Language Model based Framework for New Concept Placement in Ontologies

Hang Dong, Jiaoyan Chen, Yuan He, Yongsheng Gao, Ian Horrocks

Research output: Chapter in Book/Conference proceeding › Conference contribution › peer-review

Abstract

We investigate the task of inserting new concepts extracted from texts into an ontology using language models. We explore an approach with three steps: edge search, which finds a set of candidate locations for insertion (i.e., subsumptions between concepts); edge formation and enrichment, which leverages the ontological structure to produce and enhance the edge candidates; and edge selection, which finally locates the edge where the concept is to be placed. In all steps we propose to leverage neural methods: we apply embedding-based methods and contrastive learning with Pre-trained Language Models (PLMs) such as BERT for edge search, and adapt a BERT fine-tuning-based multi-label Edge-Cross-encoder, as well as Large Language Models (LLMs) such as the GPT series, FLAN-T5, and Llama 2, for edge selection. We evaluate the methods on recent datasets created using the SNOMED CT ontology and the MedMentions entity linking benchmark. The best settings in our framework use a fine-tuned PLM for search and a multi-label Cross-encoder for selection. Zero-shot prompting of LLMs is still not adequate for the task, and we propose explainable instruction tuning of LLMs for improved performance. Our study shows the advantages of PLMs and highlights the encouraging performance of LLMs, motivating future studies.
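As a minimal sketch of the embedding-based edge search step, the snippet below encodes a new concept mention and verbalised candidate edges with a plain BERT encoder, then ranks edges by cosine similarity. The mean pooling, the "parent [SEP] child" template, and the NULL-child convention for leaf placement are illustrative assumptions, not the paper's published setup.

```python
# Hedged sketch: PLM embedding search over candidate edges (not the authors' code).
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def encode(texts):
    """Mean-pooled BERT embeddings, L2-normalised so dot products are cosines."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state          # (B, T, H)
    mask = batch["attention_mask"].unsqueeze(-1)           # (B, T, 1)
    pooled = (hidden * mask).sum(1) / mask.sum(1)          # (B, H)
    return torch.nn.functional.normalize(pooled, dim=-1)

# Candidate edges <parent, child>; placement under a leaf is <parent, NULL>.
edges = [("Disorder of lung", "Pneumonia"), ("Pneumonia", "NULL")]
edge_embs = encode([f"{p} [SEP] {c}" for p, c in edges])

mention = "atypical pneumonia"
scores = (encode([mention]) @ edge_embs.T).squeeze(0)      # cosine similarities
top = scores.topk(k=2)
for score, idx in zip(top.values, top.indices):
    print(edges[idx], float(score))
```

In the paper's framework this search encoder would be trained with a contrastive objective so that mentions land near their correct edges; the untrained checkpoint above only illustrates the retrieval mechanics.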
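For the edge selection step, a cross-encoder scores the mention jointly with each candidate edge, and a sigmoid per candidate allows several edges to be selected at once (multi-label, since a concept can be placed under more than one parent). The sketch below assumes a single-logit classification head and a 0.5 threshold; both are illustrative choices, and before fine-tuning the scores are uninformative.

```python
# Hedged sketch of multi-label cross-encoder edge selection.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
cross_encoder = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=1  # one relevance logit per (mention, edge) pair
)
cross_encoder.eval()

mention = "atypical pneumonia seen on the chest radiograph"
candidates = ["Disorder of lung [SEP] Pneumonia", "Pneumonia [SEP] NULL"]

# Encode each (mention, candidate edge) pair jointly, as a cross-encoder does.
batch = tokenizer([mention] * len(candidates), candidates,
                  padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = cross_encoder(**batch).logits.squeeze(-1)     # one score per edge
selected = [c for c, p in zip(candidates, torch.sigmoid(logits)) if p > 0.5]
print(selected)
```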
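The abstract also reports that zero-shot prompting of LLMs is not yet adequate and proposes explainable instruction tuning instead. The template below is only an illustration of what such a prompt could look like; the paper's actual prompt wording and explanation format are not reproduced here.

```python
# Hypothetical zero-shot prompt for LLM-based edge selection (wording assumed).
def build_prompt(mention, context, candidates):
    options = "\n".join(f"{i}. {c}" for i, c in enumerate(candidates))
    return (
        f'A new concept "{mention}" was extracted from the text:\n'
        f'"{context}"\n'
        "Choose every edge <parent, child> under which the new concept should "
        "be placed in the ontology, and briefly explain why.\n"
        f"Options:\n{options}\nAnswer:"
    )

print(build_prompt(
    "atypical pneumonia",
    "The chest radiograph suggested atypical pneumonia.",
    ["<Disorder of lung, Pneumonia>", "<Pneumonia, NULL>"],
))
```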
Original language: English
Title of host publication: European Semantic Web Conference (ESWC) 2024
Publication status: Accepted/In press - 23 Feb 2024

Keywords

  • Ontology Enrichment
  • Concept Placement
  • Pre-trained Language Models
  • Large Language Models
  • SNOMED CT
