Abstract

A pattern recognition concept involving first an 'invariance net' and second a 'trainable classifier' is proposed. The invariance net can be trained or designed to produce a set of outputs that are insensitive to translation, rotation, scale change, perspective change, etc., of the retinal input pattern. The outputs of the invariance net are scrambled, however. When these outputs are fed to a trainable classifier, the final outputs are descrambled and the original patterns are reproduced in standard position, orientation, scale, etc. It is expected that the same basic approach will be effective for speech recognition, where insensitivity to certain aspects of speech signals and at the same time sensitivity to other aspects of speech signals will be required. The entire recognition system is a layered network of ADALINE neurons. The ability to adapt a multilayered neural net is fundamental. An adaptation rule is proposed for layered nets which is an extension of the MADALINE rule of the 1960s. The new rule, MRII, is a useful alternative to the backpropagation algorithm.
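The abstract only names the MRII rule; as a rough illustration of the "minimal disturbance" idea behind it, here is a hypothetical single-presentation sketch for a two-layer net of ADALINEs (weighted sum plus sign quantizer). The function names, the alpha-LMS update, and the toy setup are assumptions for illustration, not details taken from the paper:

```python
import numpy as np

def adaline_out(w, x):
    """ADALINE: weighted sum followed by a hard sign quantizer (+1/-1)."""
    return 1 if w @ x >= 0 else -1

def mrii_step(W_hidden, w_out, x, target, alpha=1.0):
    """One MRII-style presentation on a two-layer MADALINE (sketch).

    On an output error, hidden ADALINEs are trial-flipped in order of
    smallest |net| (the 'minimal disturbance' principle); a flip that
    corrects the output is made permanent with a normalized (alpha-LMS)
    weight update. alpha=1.0 makes the flip exact."""
    h = np.array([adaline_out(w, x) for w in W_hidden], dtype=float)
    hb = np.append(h, 1.0)                      # bias input to output unit
    if adaline_out(w_out, hb) == target:
        return W_hidden, w_out                  # already correct: no change
    for i in np.argsort(np.abs(W_hidden @ x)):  # least-confident unit first
        h_trial = hb.copy()
        h_trial[i] = -h_trial[i]
        if adaline_out(w_out, h_trial) == target:
            # flip fixes the output: drive unit i's net toward the flipped sign
            net = W_hidden[i] @ x
            W_hidden[i] = W_hidden[i] + alpha * (h_trial[i] - net) * x / (x @ x)
            return W_hidden, w_out
    # no single hidden flip helps: adapt the output ADALINE instead
    w_out = w_out + alpha * (target - w_out @ hb) * hb / (hb @ hb)
    return W_hidden, w_out
```

The ordering by |net| is what distinguishes this family of rules from backpropagation: the unit whose analog sum is closest to zero is the cheapest to flip, so the network's existing responses are disturbed as little as possible.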

Keywords

Computer science, Artificial neural network, Classifier, Artificial intelligence, Speech recognition, Pattern recognition, Backpropagation, Learning rule

Publication Info

Year
1988
Type
article
Volume
36
Issue
7
Pages
1109-1118
Citations
357
Access
Closed


Citation Metrics

357 (OpenAlex)

Cite This

Bernard Widrow, Rodney Winter, Rohan A. Baxter (1988). Layered neural nets for pattern recognition. IEEE Transactions on Acoustics, Speech, and Signal Processing, 36(7), 1109-1118. https://doi.org/10.1109/29.1638

Identifiers

DOI
10.1109/29.1638