A Framework for Evaluation of Machine Reading Comprehension Gold Standards

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-reviewed


Abstract

Machine Reading Comprehension (MRC) is the task of answering a question over a paragraph of text. While neural MRC systems are gaining popularity and achieving noticeable performance, issues are being raised with the methodology used to establish their performance, particularly concerning the design of the gold-standard data used to evaluate them. There is only a limited understanding of the challenges present in this data, which makes it hard to draw comparisons and formulate reliable hypotheses. As a first step towards alleviating the problem, this paper proposes a unifying framework to systematically investigate the linguistic features present, the reasoning and background knowledge required, and factual correctness on one hand, and the presence of lexical cues as a lower bound for the requirement of understanding on the other hand. We propose a qualitative annotation schema for the former and a set of approximative metrics for the latter. In a first application of the framework, we analyse modern MRC gold standards and present our findings: the absence of features that contribute towards lexical ambiguity, the varying factual correctness of the expected answers, and the presence of lexical cues, all of which potentially lower the reading comprehension complexity and the quality of the evaluation data.
Original language: English
Title of host publication: 12th International Conference on Language Resources and Evaluation
Publisher: European Language Resources Association
Volume: Proceedings of the 12th Language Resources and Evaluation Conference
Publication status: Published - 1 May 2020

Keywords

  • cs.CL

