Abstract
Agreement statistics play an important role in the evaluation of coding schemes for discourse and dialogue. Unfortunately, there is a lack of understanding regarding which agreement measures are appropriate and how their results should be interpreted. In this article we describe the role of agreement measures and argue that only chance-corrected measures that assume a common distribution of labels for all coders are suitable for measuring agreement in reliability studies. We then provide recommendations for how reliability should be inferred from the results of agreement statistics. © 2005 Association for Computational Linguistics.
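The abstract argues for chance-corrected agreement measures whose expected (chance) agreement is computed from a single label distribution shared by all coders, rather than from each coder's individual distribution. A well-known statistic in that family is a Fleiss-style kappa (Siegel and Castellan's K); the sketch below, assuming that family of measures and hypothetical dialogue-act data, shows how the chance term is derived from the pooled label distribution.

```python
from collections import Counter


def fleiss_kappa(ratings):
    """Chance-corrected agreement for multiple coders, with expected
    agreement computed from a single pooled label distribution
    (a Fleiss-style kappa; illustrative sketch, not the article's notation).

    `ratings` is a list of items, each a list of the labels assigned to
    that item by the coders (every item must have the same number of labels).
    """
    n_items = len(ratings)
    n_coders = len(ratings[0])
    categories = sorted({label for item in ratings for label in item})

    # Per-item label counts: how many coders put item i in category j.
    counts = [Counter(item) for item in ratings]

    # Observed agreement: average pairwise agreement across items.
    p_obs = sum(
        (sum(c[cat] ** 2 for cat in categories) - n_coders)
        / (n_coders * (n_coders - 1))
        for c in counts
    ) / n_items

    # Expected agreement from the *pooled* label distribution, shared by
    # all coders -- the property the abstract argues a measure should have.
    pooled = Counter(label for item in ratings for label in item)
    total = n_items * n_coders
    p_exp = sum((pooled[cat] / total) ** 2 for cat in categories)

    return (p_obs - p_exp) / (1 - p_exp)


if __name__ == "__main__":
    # Three coders labelling five dialogue segments (hypothetical data).
    data = [
        ["question", "question", "question"],
        ["statement", "statement", "question"],
        ["statement", "statement", "statement"],
        ["backchannel", "statement", "backchannel"],
        ["question", "question", "question"],
    ]
    print(f"kappa = {fleiss_kappa(data):.3f}")
```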
| Original language | English |
| --- | --- |
| Pages (from-to) | 289-295 |
| Number of pages | 6 |
| Journal | Computational Linguistics |
| Volume | 31 |
| Issue number | 3 |
| DOIs | |
| Publication status | Published - 1 Sept 2005 |