A survey of methods for revealing and overcoming weaknesses of data-driven Natural Language Understanding

Research output: Contribution to journal › Article › peer-review

Abstract

Recent years have seen a growing number of publications that analyse Natural Language Understanding (NLU) datasets for superficial cues, asking whether such cues undermine the complexity of the tasks those datasets are meant to represent and how they affect the models optimised and evaluated on the data. This structured survey provides an overview of this evolving research area by categorising reported weaknesses in models and datasets, together with the methods proposed to reveal and alleviate those weaknesses, for the English language. We summarise and discuss the findings and conclude with a set of recommendations for possible future research directions. We hope the survey will be a useful resource for researchers who propose new datasets, helping them assess how suitable their data are for evaluating the phenomena of interest, as well as for those who propose novel NLU approaches, helping them understand the implications of their improvements for the capabilities their models acquire.
Original language: English
Pages (from-to): 1-31
Number of pages: 31
Journal: Natural Language Engineering
Volume: 29
Issue number: 1
DOIs
Publication status: Published - 22 Jan 2023

Keywords

  • Dataset artefacts
  • Deep learning
  • Machine reading comprehension
  • Natural Language Understanding
  • Textual entailment
