What is SemEval evaluating? A Systematic Analysis of Evaluation Campaigns in NLP

Oskar Wysocki, Malina Florea, Andre Freitas

Research output: Contribution to journal › Article


Abstract

SemEval is the primary venue in the NLP community for proposing new challenges and for the systematic empirical evaluation of NLP systems. This paper provides a systematic quantitative analysis of SemEval, aiming to reveal the patterns behind its contributions. By examining the distribution of task types, metrics, architectures, participation, and citations over time, we aim to answer the question of what is being evaluated at SemEval.
Original language: English
Journal: ArXiv
DOIs
Publication status: Published - 28 May 2020

Keywords

  • cs.CL
  • cs.AI
