Abstract
Objectives: To examine the use of the multi-rater Kappa measure of agreement (Nonparametric Statistics for the Behavioural Sciences, McGraw-Hill, New York, 1988) in team-based, mixed-method, qualitative nursing research.
Design: The article presents an illustrative description of the application of the qualitative coding procedure and the associated multi-rater Kappa measurement at four time points over nine months within a five-person health services research team.
Main outcome measures: The multi-rater Kappa statistic, a measure of the extent to which observers agree beyond the level of agreement expected to occur by chance alone.
Results: Closeness to primary qualitative research data, working relationships sustained over time, and focused research team discussion can all lead to greater agreement and convergence at the level of descriptive coding. The method of measuring agreement between groups of coders was easily applied and appeared to be a feasible option for similar research projects wishing to demonstrate transparency in their coding procedures.
Conclusion: Measuring agreement beyond chance by the multi-rater Kappa statistic has some utility for research teams whose qualitative coding tasks are primarily descriptive. The method offers a standard and transparent approach for demonstrating agreement between coders and should be a feature of qualitative research reporting where appropriate.
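The abstract describes the statistic only verbally. For orientation, the sketch below computes a multi-rater (Fleiss-type) kappa of the kind presented in Siegel and Castellan: observed agreement across subjects is compared with the agreement expected by chance from the marginal category proportions. The function name and the example rating counts are illustrative assumptions for a five-coder scenario, not data or code from the study itself.

```python
import numpy as np

def fleiss_kappa(counts):
    """Multi-rater (Fleiss-type) kappa.

    counts: N subjects x k categories; counts[i, j] is the number of
    raters assigning subject i to category j. Assumes every subject
    is rated by the same number of raters.
    """
    counts = np.asarray(counts, dtype=float)
    n = counts.sum(axis=1)[0]            # raters per subject
    N = counts.shape[0]                  # number of subjects
    # Observed agreement: pairwise agreement per subject, averaged.
    P_i = (np.square(counts).sum(axis=1) - n) / (n * (n - 1))
    P_bar = P_i.mean()
    # Chance agreement from marginal category proportions.
    p_j = counts.sum(axis=0) / (N * n)
    P_e = np.square(p_j).sum()
    return (P_bar - P_e) / (1 - P_e)

# Hypothetical example: 5 coders assign 4 text segments to 3 codes.
table = [[5, 0, 0],   # unanimous
         [3, 2, 0],
         [1, 3, 1],
         [0, 0, 5]]   # unanimous
print(round(fleiss_kappa(table), 3))  # 0.496
```

A kappa of 0 indicates agreement no better than chance and 1 indicates perfect agreement, which is what allows the paper to report coder convergence on a standard, transparent scale.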
| Original language | English |
| --- | --- |
| Pages (from-to) | 15-20 |
| Number of pages | 5 |
| Journal | International Journal of Nursing Studies |
| Volume | 41 |
| Issue number | 1 |
| DOIs | |
| Publication status | Published - Jan 2004 |
Keywords
- Collaborative research
- Inter-rater reliability
- Multi-rater Kappa
- Nursing decisions
- Qualitative descriptive coding