Abstract
This article reviews statistical techniques for combining multiple probability distributions. The framework is that of a decision maker who consults several experts regarding some events. The experts express their opinions in the form of probability distributions. The decision maker must aggregate the experts' distributions into a single distribution that can be used for decision making. Two classes of aggregation methods are reviewed. When using a supra-Bayesian procedure, the decision maker treats the expert opinions as data that may be combined with his or her own prior distribution via Bayes' rule. When using a linear opinion pool, the decision maker forms a linear combination of the expert opinions. The major feature that makes the aggregation of expert opinions difficult is the high correlation or dependence that typically occurs among these opinions. A theme of this paper is the need for training procedures that result in experts with relatively independent opinions, or for aggregation methods that implicitly or explicitly model the dependence among the experts. Analyses are presented showing that m dependent experts are worth the same as k independent experts, where k ≤ m. In some cases, an exact value for k can be given; in other cases, lower and upper bounds can be placed on k.
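The two aggregation methods named in the abstract can be sketched numerically. The sketch below is illustrative only (the experts, weights, and prior are hypothetical, not from the article): the linear opinion pool is a weighted average of the experts' distributions, and the supra-Bayesian update is shown under the strong simplifying assumption that each expert's reported probabilities act as an independent likelihood for the true event.

```python
import numpy as np

# Hypothetical example: three experts each report a probability
# distribution over the same four mutually exclusive events.
expert_dists = np.array([
    [0.10, 0.20, 0.30, 0.40],
    [0.25, 0.25, 0.25, 0.25],
    [0.40, 0.30, 0.20, 0.10],
])

def linear_pool(dists, weights):
    """Linear opinion pool: a weighted average of the experts'
    distributions; weights are nonnegative and sum to 1."""
    return weights @ dists

def supra_bayesian(dists, prior):
    """Minimal supra-Bayesian sketch: treat each expert's reported
    probabilities as an independent likelihood for the true event,
    so posterior ∝ prior × ∏_i p_i, then renormalize."""
    posterior = prior * dists.prod(axis=0)
    return posterior / posterior.sum()

weights = np.array([0.5, 0.3, 0.2])   # decision maker's trust in each expert
prior = np.full(4, 0.25)              # decision maker's uniform prior

pooled = linear_pool(expert_dists, weights)      # [0.205, 0.235, 0.265, 0.295]
posterior = supra_bayesian(expert_dists, prior)  # [0.2, 0.3, 0.3, 0.2]
```

Note that the independence assumption in `supra_bayesian` is exactly what the abstract warns against: when expert opinions are highly correlated, multiplying their likelihoods overcounts shared evidence, which is why the paper emphasizes modeling the dependence among experts.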
Publication Info
- Year: 1995
- Type: review
- Volume: 7
- Issue: 5
- Pages: 867-888
- Citations: 369
- Access: Closed
Identifiers
- DOI: 10.1162/neco.1995.7.5.867