Abstract

We present a new supervised learning procedure for systems composed of many separate networks, each of which learns to handle a subset of the complete set of training cases. The new procedure can be viewed either as a modular version of a multilayer supervised network, or as an associative version of competitive learning. It therefore provides a new link between these two apparently different approaches. We demonstrate that the learning procedure divides up a vowel discrimination task into appropriate subtasks, each of which can be solved by a very simple expert network.
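The architecture the abstract describes combines several "expert" networks with a gating network that softly assigns each input to the experts. As a rough illustration only (the class name, linear experts, and single scalar output are assumptions for brevity, not the authors' implementation), the forward pass of such a mixture can be sketched as:

```python
import math
import random

random.seed(0)


def softmax(zs):
    """Numerically stable softmax over a list of scores."""
    m = max(zs)
    exps = [math.exp(z - m) for z in zs]
    total = sum(exps)
    return [e / total for e in exps]


class MixtureOfExperts:
    """Minimal mixture-of-experts sketch: linear experts and a softmax
    gating network, both fed the same input vector (illustrative only)."""

    def __init__(self, n_experts, n_inputs):
        # Small random weights for each expert and for the gating network.
        self.expert_w = [[random.uniform(-0.1, 0.1) for _ in range(n_inputs)]
                         for _ in range(n_experts)]
        self.gate_w = [[random.uniform(-0.1, 0.1) for _ in range(n_inputs)]
                       for _ in range(n_experts)]

    def forward(self, x):
        # Each expert produces its own output for the input x.
        expert_out = [sum(w * xi for w, xi in zip(ws, x))
                      for ws in self.expert_w]
        # The gating network assigns a mixing proportion to each expert.
        gates = softmax([sum(w * xi for w, xi in zip(ws, x))
                         for ws in self.gate_w])
        # The system output is the gate-weighted blend of expert outputs.
        y = sum(g * o for g, o in zip(gates, expert_out))
        return y, gates, expert_out


moe = MixtureOfExperts(n_experts=3, n_inputs=2)
y, gates, outs = moe.forward([1.0, -0.5])
```

During training, the gating weights come to route each region of input space to one expert, which is how the procedure divides a task such as vowel discrimination into subtasks each solvable by a simple expert.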

Keywords

Computer science, Simple (philosophy), Artificial intelligence, Task (project management), Set (abstract data type), Associative property, Machine learning, Modular design, Artificial neural network, Competitive learning, Supervised learning, Associative learning, Mathematics, Psychology

Publication Info

Year
1991
Type
article
Volume
3
Issue
1
Pages
79-87
Citations
4563
Access
Closed

Cite This

Robert A. Jacobs, Michael I. Jordan, Steven J. Nowlan et al. (1991). Adaptive Mixtures of Local Experts. Neural Computation, 3(1), 79-87. https://doi.org/10.1162/neco.1991.3.1.79

Identifiers

DOI
10.1162/neco.1991.3.1.79