Abstract

J. A. Cohen's kappa (1960) for measuring agreement between 2 raters, using a nominal scale, has been extended for use with multiple raters by R. J. Light (1971) and J. L. Fleiss (1971). In the present article, these indices are analyzed and reformulated in terms of agreement statistics based on all...
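The multi-rater index of Fleiss (1971) mentioned in the abstract compares mean observed agreement across subjects with the agreement expected by chance from the marginal category proportions. A minimal sketch of that computation (this is Fleiss' formulation, not Conger's reformulation; the function name and input layout are illustrative assumptions):

```python
def fleiss_kappa(counts):
    """Fleiss' (1971) kappa for m raters on a nominal scale.

    counts: list of lists; counts[i][j] = number of raters assigning
    subject i to category j (every row must sum to m, the rater count).
    """
    n = len(counts)            # number of subjects
    m = sum(counts[0])         # raters per subject
    k = len(counts[0])         # number of categories

    # proportion of all assignments falling in each category
    p_j = [sum(row[j] for row in counts) / (n * m) for j in range(k)]

    # per-subject observed agreement among the m raters
    P_i = [(sum(c * c for c in row) - m) / (m * (m - 1)) for row in counts]

    P_bar = sum(P_i) / n              # mean observed agreement
    P_e = sum(p * p for p in p_j)     # chance-expected agreement
    return (P_bar - P_e) / (1 - P_e)
```

With unanimous ratings on every subject the index is 1; systematic disagreement drives it below 0, e.g. `fleiss_kappa([[3, 0], [0, 3], [3, 0]])` returns 1.0.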

Keywords

Generalization, Psychology, Social psychology, Statistics, Mathematics

Related Publications

Observer Reliability and Agreement

Abstract: The terms observer reliability and observer agreement represent different concepts. Reliability coefficients express the ability to differentiate between subjects. Agre...

Encyclopedia of Biostatistics (2005) · 62 citations

Publication Info

Year: 1980
Type: Article
Volume: 88
Issue: 2
Pages: 322-328
Citations: 510
Access: Closed

Citation Metrics

OpenAlex: 510
Influential: 44
CrossRef: 394

Cite This

Anthony J. Conger (1980). Integration and generalization of kappas for multiple raters. Psychological Bulletin, 88(2), 322-328. https://doi.org/10.1037/0033-2909.88.2.322

Identifiers

DOI
10.1037/0033-2909.88.2.322
