What is the inter-rater agreement of injury classification using the WHO minimum data set for emergency medical teams?

Research output: Contribution to journal › Article › peer-review


Abstract

Background: In 2017 the WHO produced its first minimum data set (MDS) for emergency medical team (EMT) daily reporting during sudden onset disasters (SODs), following expert consensus. The MDS was deliberately designed to be simple in order to improve the rate of data capture; however, it is new and untested. This study assesses the inter-rater agreement between practitioners when performing the injury aspect of coding within the WHO EMT MDS.

Methods: 25 clinical case vignettes were developed, reflecting potential injuries encountered in a SOD. These were presented online from April to July 2018 to practitioners with experience of, or training in, managing patients in SODs. The practitioners were drawn from UK-Med's members, AUSMAT's Northern Territory members and NZMAT members. Practitioners were asked to code injuries according to the WHO EMT MDS case classifications. Randolph's kappa statistic for free-marginal multi-rater data was calculated for the whole dataset, as well as for subgroups, to ascertain inter-rater agreement.

Results: 86 practitioners responded (20.6% response rate), giving >2000 individual case responses. Overall agreement was moderate at 67.9%, with a kappa of 0.59 [CI 0.49, 0.69]. Although the subgroups of paramedics (kappa 0.63 [CI 0.53, 0.72]), doctors (kappa 0.61 [CI 0.52, 0.69]) and those with disaster experience (kappa 0.62 [CI 0.52, 0.71]) suggested slightly higher agreement, their CIs (and those of the other subgroups) indicate broadly similar, moderate levels of practitioner agreement in classifying injuries according to the MDS categories.

Conclusions: An inter-rater agreement of 0.59 is moderate at best; however, it gives ministries of health (MoHs) some sense of how tightly they may interpret injury data derived from daily reports using the WHO EMT MDS. Furthermore, this kappa is similar to that of established but more complex (and thus less contextually practical) injury scores. Similar studies, with weighting for injury likelihood using sample data from SODs, would further refine the expected level of inter-rater agreement.
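The agreement statistic reported above is Randolph's free-marginal multi-rater kappa, which compares observed agreement across raters with the chance agreement 1/k expected when raters are free to use any of k categories. The sketch below is only an illustration of how that statistic is computed; it is not the authors' analysis code, and the ratings matrix and category counts are hypothetical toy data, not the study's responses.

import numpy as np

def free_marginal_kappa(counts: np.ndarray) -> float:
    """Randolph's free-marginal kappa.

    counts[i, j] = number of raters who assigned case i to category j.
    Assumes every case was rated by the same number of raters.
    """
    raters_per_case = counts.sum(axis=1)
    if not np.all(raters_per_case == raters_per_case[0]):
        raise ValueError("each case must be rated by the same number of raters")
    n = raters_per_case[0]          # raters per case
    k = counts.shape[1]             # number of categories
    # Observed agreement: proportion of agreeing rater pairs per case, averaged over cases
    p_o = ((counts * (counts - 1)).sum(axis=1) / (n * (n - 1))).mean()
    # Expected agreement under free marginals is simply 1/k
    p_e = 1.0 / k
    return (p_o - p_e) / (1.0 - p_e)

# Toy example: 3 vignettes, 4 injury categories, 5 raters per vignette (hypothetical data)
ratings = np.array([
    [5, 0, 0, 0],
    [3, 2, 0, 0],
    [2, 1, 1, 1],
])
print(round(free_marginal_kappa(ratings), 3))  # 0.333 for this toy matrix

Confidence intervals such as those quoted in the abstract would typically be obtained by resampling (e.g. bootstrapping over cases or raters); that step is omitted here.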
Original language: English
Pages (from-to): 58-64
Journal: Emergency Medicine Journal
Volume: 37
Issue number: 2
Early online date: 7 Jan 2020
DOIs
Publication status: Published - 7 Jan 2020

Keywords

  • data management
  • disaster planning and response
  • global health

Research Beacons, Institutes and Platforms

  • Humanitarian and Conflict Response Institute
