Differentiable Constraint-based Solvers for Explanation-based Multi-hop Inference

Student thesis: PhD

Abstract

Explanation-based Question Answering (XQA) for complex questions involving scientific and common-sense reasoning is often modelled as a multi-hop reasoning problem. Constrained optimization solvers based on Integer Linear Programming (ILP) have been proposed to address these multi-hop inference tasks. This family of approaches provides a viable mechanism for encoding explicit and controllable assumptions, casting multi-hop inference as an optimal subgraph selection problem (a minimal sketch of this formulation follows the abstract). However, these approaches show diminishing returns as the number of hops increases, a phenomenon known as semantic drift. Moreover, they are typically non-differentiable and cannot be integrated into a deep neural network, which prevents them from being trained end-to-end on annotated corpora and from achieving performance comparable to their deep learning counterparts. This thesis addresses these problems through the following contributions:

(1) We introduce a novel model (ExplanationLP) that performs explanation-based multi-hop inference by encoding grounding-abstract chains, reducing semantic drift. We demonstrate that ExplanationLP is more robust to semantic drift than graph-based and transformer-based approaches.

(2) We present the first hybrid model (Diff-Explainer) that integrates constrained optimization into a deep neural network via differentiable convex optimization (illustrated in the second sketch below), allowing pre-trained transformers to be fine-tuned for the downstream explanation-based multi-hop inference task. We empirically demonstrate on scientific and common-sense QA benchmarks that integrating explicit constraints in an end-to-end differentiable framework can significantly improve the performance of non-differentiable ILP solvers.

(3) We propose a novel hybrid model (Diff-Comb Explainer) that integrates constrained optimization into a deep neural network via differentiable blackbox combinatorial solvers (illustrated in the third sketch below), likewise allowing pre-trained transformers to be fine-tuned for the downstream explanation-based multi-hop inference task. Diff-Comb Explainer improves answer and explanation selection accuracy over non-differentiable solvers, transformers, and existing differentiable constraint-based multi-hop inference frameworks.

We also present a systematic review of the explainable natural language inference field, in which we analyse existing benchmarks and models, identify emerging research trends, and highlight challenges and opportunities for future work.
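To make the subgraph-selection formulation concrete, the following is a minimal sketch (not the thesis code) of explanation selection as an ILP using PuLP: binary variables mark which candidate facts enter the explanation, the objective maximizes total relevance, and an explicit constraint caps the explanation size. The relevance scores and the size limit k here are hypothetical stand-ins; in practice such scores would come from a lexical or neural relevance model.

```python
# A minimal sketch (not the thesis code) of explanation selection as an
# Integer Linear Program, using PuLP. Scores and the size limit k are
# hypothetical stand-ins for model-derived relevance and design choices.
import pulp

scores = [0.9, 0.1, 0.7, 0.3, 0.8]          # hypothetical fact relevance scores
k = 2                                        # explanation size limit (a hard constraint)

prob = pulp.LpProblem("explanation_selection", pulp.LpMaximize)
x = [pulp.LpVariable(f"x{i}", cat="Binary") for i in range(len(scores))]

prob += pulp.lpSum(s * xi for s, xi in zip(scores, x))   # maximize total relevance
prob += pulp.lpSum(x) <= k                                # select at most k facts

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print([i for i, xi in enumerate(x) if xi.value() == 1])   # indices of selected facts
```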
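Diff-Explainer's key ingredient is a differentiable convex optimization layer. The snippet below is a simplified, hypothetical illustration of that mechanism using cvxpylayers: the discrete selection problem is relaxed to a convex program whose solution can be backpropagated through, so an upstream scoring model (a transformer in the thesis) receives gradients. The simple linear relaxation, the small quadratic smoothing term, and all sizes here are illustrative assumptions, not the relaxation used in the thesis.

```python
# A minimal sketch of a differentiable convex optimization layer with
# cvxpylayers: a relaxed fact-selection problem embedded in PyTorch.
# The relaxation and sizes are illustrative, not the thesis formulation.
import cvxpy as cp
import torch
from cvxpylayers.torch import CvxpyLayer

n_facts, k = 10, 3                       # hypothetical sizes
x = cp.Variable(n_facts)                 # relaxed selection variables in [0, 1]
s = cp.Parameter(n_facts)                # learned relevance scores (layer input)

objective = cp.Maximize(s @ x - 0.1 * cp.sum_squares(x))  # quadratic term smooths gradients
constraints = [x >= 0, x <= 1, cp.sum(x) <= k]            # select at most k facts
layer = CvxpyLayer(cp.Problem(objective, constraints), parameters=[s], variables=[x])

scores = torch.randn(n_facts, requires_grad=True)  # stand-in for transformer outputs
selection, = layer(scores)                         # differentiable soft selection
selection.sum().backward()                         # gradients flow back to the scores
print(scores.grad)
```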
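Diff-Comb Explainer instead keeps an exact combinatorial solver in the forward pass and estimates gradients from a perturbed second solver call in the backward pass, following the blackbox-differentiation scheme of Vlastelica et al. (2020). The sketch below shows the mechanism with a trivial top-k selector standing in for an ILP solver; the interpolation strength lam, the sizes, and the target vector are illustrative assumptions.

```python
# A minimal sketch of blackbox combinatorial differentiation
# (Vlastelica et al., 2020): exact solver forward, finite-difference
# gradient from a perturbed solver call backward. A top-k selector
# stands in here for an actual ILP solver.
import torch

def solve(scores, k=3):
    """Exact, non-differentiable solver: select the k highest-scoring facts."""
    y = torch.zeros_like(scores)
    y[scores.topk(k).indices] = 1.0
    return y

class BlackboxSelect(torch.autograd.Function):
    @staticmethod
    def forward(ctx, scores, lam):
        y = solve(scores)
        ctx.save_for_backward(scores, y)
        ctx.lam = lam
        return y

    @staticmethod
    def backward(ctx, grad_output):
        scores, y = ctx.saved_tensors
        # Perturb the scores with the incoming gradient and re-solve;
        # the sign is flipped relative to the paper's rule because this
        # solver maximizes rather than minimizes the linear objective.
        y_lam = solve(scores - ctx.lam * grad_output)
        return -(y_lam - y) / ctx.lam, None

scores = torch.randn(10, requires_grad=True)    # stand-in for transformer fact scores
target = torch.zeros(10)
target[:3] = 1.0                                # hypothetical gold explanation
selection = BlackboxSelect.apply(scores, 10.0)  # lam: interpolation strength
loss = ((selection - target) ** 2).sum()
loss.backward()                                 # informative gradients despite the discrete solver
print(scores.grad)
```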
Date of Award: 1 Aug 2023
Original language: English
Awarding Institution
  • The University of Manchester
Supervisors: Andre Freitas & Lucas Cordeiro

Keywords

  • Question Answering
  • Explainable Artificial Intelligence
  • Explanation-based Inference
