Abstract

We present a tree-structured architecture for supervised learning. The statistical model underlying the architecture is a hierarchical mixture model in which both the mixture coefficients and the mixture components are generalized linear models (GLIM's). Learning is treated as a maximum likelihood problem; in particular, we present an Expectation-Maximization (EM) algorithm for adjusting the parameters of the architecture. We also develop an on-line learning algorithm in which the parameters are updated incrementally. Comparative simulation results are presented in the robot dynamics domain.
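
The paper's full architecture is hierarchical and fits its generalized linear components with an IRLS inner loop inside EM. As a rough illustration of the idea summarized above, the sketch below fits a single-level mixture of linear experts with a softmax gating network using an EM-style loop. This is not the authors' implementation: the single-level simplification, the fixed noise variance, the gradient-based gate update, and all names are assumptions made for this example.

# A minimal, illustrative sketch (not the authors' implementation) of the
# idea in the abstract: a single-level mixture of linear experts fit with
# an EM-style loop. The gating network is a softmax over linear scores and
# each expert is a linear-Gaussian model. The single-level simplification,
# the fixed noise variance, the gradient-based gate update, and all names
# are assumptions made for this example.
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def fit_mixture_of_experts(X, y, n_experts=2, n_iter=50, sigma2=0.1):
    n, d = X.shape
    Xb = np.hstack([X, np.ones((n, 1))])                 # inputs with a bias column
    V = rng.normal(scale=0.1, size=(d + 1, n_experts))   # gating-network weights
    W = rng.normal(scale=0.1, size=(n_experts, d + 1))   # one weight row per expert
    for _ in range(n_iter):
        # E-step: posterior responsibility h[i, j] of expert j for case i
        g = softmax(Xb @ V)                               # prior gate probabilities
        mu = Xb @ W.T                                     # each expert's prediction
        log_h = np.log(g + 1e-12) - 0.5 * (y[:, None] - mu) ** 2 / sigma2
        log_h -= log_h.max(axis=1, keepdims=True)
        h = np.exp(log_h)
        h /= h.sum(axis=1, keepdims=True)
        # M-step for the experts: responsibility-weighted least squares
        for j in range(n_experts):
            A = Xb.T @ (h[:, [j]] * Xb) + 1e-6 * np.eye(d + 1)
            W[j] = np.linalg.solve(A, Xb.T @ (h[:, j] * y))
        # M-step for the gate: a few gradient-ascent steps on the weighted
        # log-likelihood (the paper instead uses an IRLS inner loop)
        for _ in range(10):
            g = softmax(Xb @ V)
            V += 0.1 * Xb.T @ (h - g) / n
    return V, W

# Toy usage: a piecewise-linear target that a single linear model fits poorly
X = rng.uniform(-1.0, 1.0, size=(500, 1))
y = np.where(X[:, 0] < 0, -2.0 * X[:, 0], 3.0 * X[:, 0]) + 0.05 * rng.normal(size=500)
V, W = fit_mixture_of_experts(X, y, n_experts=2)

On this toy data the two experts typically converge to the two linear regimes (slopes near -2 and 3) while the gate learns to switch near x = 0, the kind of soft divide-and-conquer behavior the architecture is intended to exhibit.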

Keywords

Expectation–maximization algorithm; Algorithm; Mixture model; Computer science; Artificial intelligence; Line (geometry); Tree (set theory); Architecture; Domain (mathematical analysis); Mathematics; Maximum likelihood; Machine learning; Pattern recognition (psychology); Statistics

Publication Info

Year: 1994
Type: article
Volume: 6
Issue: 2
Pages: 181-214
Citations: 2555
Access: Closed

Citation Metrics

2555 citations (source: OpenAlex)

Cite This

Michael I. Jordan, Robert A. Jacobs (1994). Hierarchical Mixtures of Experts and the EM Algorithm. Neural Computation, 6(2), 181-214. https://doi.org/10.1162/neco.1994.6.2.181

Identifiers

DOI: 10.1162/neco.1994.6.2.181