TY - JOUR
T1 - Lack of agreement between rheumatologists in defining digital ulceration in systemic sclerosis
AU - Herrick, Ariane L.
AU - Roberts, Christopher
AU - Tracey, Andrew
AU - Silman, Alan
AU - Anderson, Marina
AU - Goodfield, Mark
AU - McHugh, Neil
AU - Muir, Lindsay
AU - Denton, Christopher P.
PY - 2009/3
Y1 - 2009/3
N2 - Objective. To test the intra- and interobserver variability, among clinicians with an interest in systemic sclerosis (SSc), in defining digital ulcers. Methods. Thirty-five images of finger lesions, incorporating a wide range of abnormalities at different sites, were duplicated, yielding a data set of 70 images. Physicians with an interest in SSc were invited to take part in the Web-based study, which involved looking through the images in a random sequence. The sequence differed for individual participants and prevented crosschecking with previous images. Participants were asked to grade each image as depicting "ulcer" or "no ulcer," and if "ulcer," then either "inactive" or "active." Images of a range of exemplar lesions were available for reference purposes while participants viewed the test images. Intrarater reliability was assessed using a weighted kappa coefficient with quadratic weights. Interrater reliability was estimated using a multirater weighted kappa coefficient. Results. Fifty individuals (most of them rheumatologists) from 15 countries participated in the study. There was a high level of intrarater reliability, with a mean weighted kappa value of 0.81 (95% confidence interval [95% CI] 0.77, 0.84). Interrater reliability was poorer (weighted κ = 0.46 [95% CI 0.35, 0.57]). Conclusion. The poor interrater reliability suggests that if digital ulceration is to be used as an end point in multicenter clinical trials of SSc, then strict definitions must be developed. The present investigation also demonstrates the feasibility of Web-based studies, for which large numbers of participants can be recruited over a short time frame. © 2009, American College of Rheumatology.
KW - Humans
KW - Internet
KW - Observer Variation
KW - Rheumatology/standards
KW - Scleroderma, Systemic/pathology
KW - Skin Ulcer/diagnosis
DO - 10.1002/art.24333
M3 - Article
C2 - 19248100
SN - 2151-464X
VL - 60
SP - 878
EP - 882
JO - Arthritis Care & Research
JF - Arthritis Care & Research
IS - 3
ER -