Abstract
Basic to many psychological investigations is the question of agreement between observers who independently categorize people. Several recent studies have proposed measures of agreement when a set of nominal scale categories has been predefined and imposed on two observers. This study, in contrast, develops a measure of agreement for settings where observers independently define their own categories. Thus it is possible for observers to delineate different numbers of categories, with different names. Computational formulae for the mean and variance of the proposed measure of agreement are given; further, a statistic with a large‐sample normal distribution is suggested for testing the null hypothesis of random agreement. A computer‐based comparison of the large‐sample approximation with the exact distribution of the test statistic shows a generally good fit, even for moderate sample sizes. Finally, a worked example involving two psychologists' classifications of children illustrates the computations.
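The abstract does not reproduce the measure's formulae, so the sketch below is only one plausible reading of a pair-based agreement index, not the authors' statistic: two observers partition the same subjects using their own category names, agreement is counted over pairs of subjects, and a Monte Carlo permutation null stands in for the paper's closed-form mean and variance. The function names and the toy data are hypothetical.

```python
# Illustrative sketch (not the authors' exact formulae): a pair-based notion of
# agreement between two observers who used their own, possibly different,
# category systems. The observers "agree" on a pair of subjects when both put
# the pair in the same category or both put it in different categories.
from itertools import combinations
import random
import statistics

def pair_agreement(labels_a, labels_b):
    """Count pairs of subjects on which the two labelings agree."""
    agree = 0
    for i, j in combinations(range(len(labels_a)), 2):
        same_a = labels_a[i] == labels_a[j]
        same_b = labels_b[i] == labels_b[j]
        agree += (same_a == same_b)
    return agree

def agreement_z(labels_a, labels_b, n_perm=2000, seed=0):
    """Standardize observed pair agreement against a permutation null."""
    rng = random.Random(seed)
    observed = pair_agreement(labels_a, labels_b)
    shuffled = list(labels_b)
    null = []
    for _ in range(n_perm):
        rng.shuffle(shuffled)  # random relabeling of observer B's subjects
        null.append(pair_agreement(labels_a, shuffled))
    mu = statistics.mean(null)
    sigma = statistics.pstdev(null)
    return (observed - mu) / sigma if sigma > 0 else float("nan")

# Hypothetical data: observer A uses three categories, observer B uses two,
# with arbitrary category names -- the setting the abstract describes.
a = ["withdrawn", "aggressive", "aggressive", "typical",
     "typical", "withdrawn", "typical", "aggressive"]
b = ["group1", "group2", "group2", "group1",
     "group1", "group1", "group1", "group2"]
print(round(agreement_z(a, b), 2))
```

Because agreement is defined on pairs of subjects rather than on matched category labels, the two observers are free to use different numbers of differently named categories, which is the setting the abstract describes.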
Publication Info
- Year: 1974
- Type: article
- Volume: 27
- Issue: 2
- Pages: 154-163
Identifiers
- DOI: 10.1111/j.2044-8317.1974.tb00535.x