What is SemEval evaluating? A Systematic Analysis of Evaluation Campaigns in NLP

Oskar Wysocki, Malina Florea, Donal Landers, Andre Freitas

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review


Abstract

SemEval is the primary venue in the NLP community for proposing new challenges and for the systematic empirical evaluation of NLP systems. This paper provides a systematic quantitative analysis of SemEval, aiming to surface the patterns behind its contributions. By examining the distribution of task types, metrics, architectures, participation, and citations over time, we aim to answer the question of what SemEval is evaluating.
Original language: English
Title of host publication: Proceedings of the 2nd Workshop on Evaluation and Comparison of NLP Systems
Publisher: Association for Computational Linguistics
Publication status: E-pub ahead of print - 4 Nov 2021
