Abstract
This paper considers the mean and variance of two statistics, kappa and weighted kappa, which are useful for measuring agreement between two raters who independently allocate a sample of subjects to a prearranged set of categories.
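For readers unfamiliar with the two statistics, the sketch below computes point estimates of kappa and weighted kappa from a two-rater contingency table. This is an illustrative outline only: the function name `kappa_statistics`, the NumPy-based implementation, and the default quadratic agreement weights are assumptions made for the example, and the paper's large-sample mean and variance expressions are not reproduced here.

```python
import numpy as np

def kappa_statistics(table, weights=None):
    """Point estimates of Cohen's kappa and weighted kappa from a k x k
    contingency table of two raters' classifications.

    Illustrative sketch only; the paper's large-sample mean and variance
    formulas are not reproduced.
    """
    p = np.asarray(table, dtype=float)
    p = p / p.sum()                          # joint proportions p_ij
    row, col = p.sum(axis=1), p.sum(axis=0)  # marginals p_i. and p_.j

    # Unweighted kappa: observed vs. chance-expected agreement on the diagonal.
    po = np.trace(p)
    pe = row @ col
    kappa = (po - pe) / (1 - pe)

    # Weighted kappa with agreement weights w_ij in [0, 1], w_ii = 1.
    if weights is None:
        k = p.shape[0]
        i, j = np.indices((k, k))
        weights = 1 - (i - j) ** 2 / (k - 1) ** 2  # quadratic weights (assumed default)
    w = np.asarray(weights, dtype=float)
    po_w = (w * p).sum()
    pe_w = (w * np.outer(row, col)).sum()
    weighted_kappa = (po_w - pe_w) / (1 - pe_w)

    return kappa, weighted_kappa

# Example: 3-category table of counts from two raters (hypothetical data).
counts = [[20, 5, 1],
          [4, 15, 3],
          [2, 3, 12]]
print(kappa_statistics(counts))
```

With identity weights (1 on the diagonal, 0 elsewhere) the weighted statistic reduces to ordinary kappa, which is a quick sanity check on any implementation.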
Publication Info
- Year: 1968
- Type: Article
- Volume: 21
- Issue: 1
- Pages: 97-103
- Citations: 156
- Access: Closed
Identifiers
- DOI: 10.1111/j.2044-8317.1968.tb00400.x