Abstract
J. Cohen's (1960) kappa for measuring agreement between 2 raters, using a nominal scale, has been extended for use with multiple raters by R. J. Light (1971) and J. L. Fleiss (1971). In the present article, these indices are analyzed and reformulated in terms of agreement statistics based on all
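As a rough illustration of the agreement statistics named in the abstract, the sketch below computes Cohen's (1960) kappa for two raters and Light's (1971) multi-rater extension as the mean of kappa over all pairs of raters. This is not the paper's own reformulation; the function names and example data are invented for clarity.

```python
# Illustrative sketch only: Cohen's (1960) two-rater kappa and Light's (1971)
# multi-rater kappa, taken as the mean of kappa over all pairs of raters.
# Names and data here are hypothetical, not from Conger (1980).
from itertools import combinations
from collections import Counter

def cohen_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters classifying the same subjects on a nominal scale."""
    n = len(ratings_a)
    categories = set(ratings_a) | set(ratings_b)
    # Observed proportion of agreement
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Chance-expected agreement from each rater's marginal category proportions
    freq_a = Counter(ratings_a)
    freq_b = Counter(ratings_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)
    return (p_o - p_e) / (1 - p_e)

def light_kappa(all_ratings):
    """Light's multi-rater kappa: average of Cohen's kappa over all rater pairs."""
    pairs = list(combinations(all_ratings, 2))
    return sum(cohen_kappa(a, b) for a, b in pairs) / len(pairs)

# Example: three raters assigning six subjects to nominal categories
raters = [
    ["x", "y", "x", "z", "y", "x"],
    ["x", "y", "y", "z", "y", "x"],
    ["x", "x", "x", "z", "y", "y"],
]
print(round(light_kappa(raters), 3))
```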
Related Publications
- Observer Reliability and Agreement: The terms observer reliability and observer agreement represent different concepts. Reliability coefficients express the ability to differentiate between subjects. ...
- The Measurement of Observer Agreement for Categorical Data: This paper presents a general statistical methodology for the analysis of multivariate categorical data arising from observer reliability studies. ...
- Self-generated validity and other effects of measurement on belief, attitude, intention, and behavior: Drawing from recent developments in social cognition, cognitive psychology, and behavioral decision theory, we analyzed when and how the act of measuring beliefs, attitudes, ...
- MCMC Methods for Multi-Response Generalized Linear Mixed Models: The MCMCglmm R Package: Generalized linear mixed models provide a flexible framework for modeling a range of data, although with non-Gaussian response variables the likelihood cannot be obtained in ...
- Causal Variables, Indicator Variables and Measurement Scales: An Example from Quality of Life: There is extensive literature on the development and validation of multi-item measurement scales. Much of this is based on principles derived from psychometric theory ...
Publication Info
- Year: 1980
- Type: article
- Volume: 88
- Issue: 2
- Pages: 322-328
- Citations: 510
- Access: Closed
Identifiers
- DOI: 10.1037/0033-2909.88.2.322