Abstract

Items such as physical exam findings, radiographic interpretations, or other diagnostic tests often rely on some degree of subjective interpretation by observers. Studies that measure the agreement between two or more observers should include a statistic that takes into account the fact that observers will sometimes agree or disagree simply by chance. The kappa statistic (or kappa coefficient) is the most commonly used statistic for this purpose. A kappa of 1 indicates perfect agreement, whereas a kappa of 0 indicates agreement equivalent to chance. A limitation of kappa is that it is affected by the prevalence of the finding under observation. Methods to overcome this limitation have been described.
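As an illustration of the definition summarized above, below is a minimal Python sketch (not from the paper) that computes Cohen's kappa for two raters as observed agreement corrected for chance agreement. The function name and example data are illustrative assumptions, not part of the original article.

import numpy as np

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters who each rated the same items."""
    ratings_a = np.asarray(ratings_a)
    ratings_b = np.asarray(ratings_b)
    categories = np.union1d(ratings_a, ratings_b)

    # Observed agreement: proportion of items the raters labeled identically.
    p_observed = np.mean(ratings_a == ratings_b)

    # Expected (chance) agreement: for each category, the product of the two
    # raters' marginal proportions, summed over categories.
    p_expected = sum(
        np.mean(ratings_a == c) * np.mean(ratings_b == c) for c in categories
    )

    # Kappa: agreement beyond chance, scaled by the maximum possible
    # agreement beyond chance (1 = perfect, 0 = chance-level).
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical example: two observers rating 10 radiographs as
# abnormal (1) or normal (0). Observed agreement is 0.80, chance
# agreement is 0.52, so kappa is about 0.58.
rater1 = [1, 1, 0, 1, 0, 0, 1, 0, 0, 0]
rater2 = [1, 0, 0, 1, 0, 0, 1, 0, 1, 0]
print(round(cohens_kappa(rater1, rater2), 2))

The example also hints at the prevalence limitation noted in the abstract: because expected agreement depends on each rater's marginal proportions, the same observed agreement can yield different kappa values when the finding is very common or very rare.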

Keywords

Cohen's kappa, Kappa, Statistic, Agreement, Statistics, Interpretation (philosophy), Measure (data warehouse), Mathematics, Computer science, Data mining

Publication Info

Year: 2005
Type: article
Volume: 37
Issue: 5
Pages: 360-3
Citations: 7055
Access: Closed

Citation Metrics

7055 citations (OpenAlex)

Cite This

Anthony J. Viera, Joanne M. Garrett (2005). Understanding interobserver agreement: the kappa statistic. Family Medicine, 37(5), 360-3.