Evaluating discourse and dialogue coding schemes

Richard Craggs, Mary McGee Wood

    Research output: Contribution to journal › Article › peer-review

    Abstract

    Agreement statistics play an important role in the evaluation of coding schemes for discourse and dialogue. Unfortunately, there is a lack of understanding regarding appropriate agreement measures and how their results should be interpreted. In this article we describe the role of agreement measures and argue that only chance-corrected measures that assume a common distribution of labels for all coders are suitable for measuring agreement in reliability studies. We then provide recommendations for how reliability should be inferred from the results of agreement statistics. © 2005 Association for Computational Linguistics.
    Original language: English
    Pages (from-to): 289-295
    Number of pages: 6
    Journal: Computational Linguistics
    Volume: 31
    Issue number: 3
    DOIs:
    Publication status: Published - 1 Sept 2005
