Learning explainable representations of concepts in specialised languages: experiments in healthcare social media

  • Maksim Belousov

Student thesis: PhD

Abstract

People use specialised languages to talk about complex concepts, such as symptoms described by patients in health forums. To organise concepts, various terminologies have been created, often with more than one terminology describing a similar set of concepts. However, outside the expert community, concepts are mentioned using colloquial and vague descriptions. The most accurate machine learning models for concept recognition and normalisation rely on distributed word representations and are constructed as black boxes whose internals are not easily interpretable. Yet it is crucial to explain the reasons behind automated decisions in domains where such decisions may lead to serious consequences. This thesis introduces novel neural network architectures that learn representations of words and concepts enriched with semantic knowledge extracted from different terminologies. Such representations can recognise colloquial mentions of concepts in text and normalise them to the corresponding identifiers in standard terminologies. Moreover, the semantic knowledge integrated into these representations can be used to justify decisions and improve the interpretability of opaque models. The presented methods can generate short yet sufficient human-understandable explanations for predictions. Experimental results in the healthcare social media domain demonstrated that such enriched representations improve the performance of concept recognition and normalisation. Furthermore, the generated explanations can serve as an additional confidence indicator, since unconfident decisions were observed to lead to vague explanations.
Date of Award: 1 Aug 2020
Original language: English
Awarding Institution
  • The University of Manchester
Supervisors: Goran Nenadic & William Dixon

Keywords

  • neural networks
  • natural language processing
  • representation learning
  • explainable AI
