Abstract
A network that develops to maximize the mutual information between its output and the signal portion of its input (which is admixed with noise) is useful for extracting salient input features, and may provide a model for aspects of biological neural network function. I describe a local synaptic learning rule that performs stochastic gradient ascent in this information-theoretic quantity, for the case in which the input-output mapping is linear and the input signal and noise are multivariate Gaussian. Feedforward connection strengths are modified by a Hebbian rule during a "learning" phase in which examples of input signal plus noise are presented to the network, and by an anti-Hebbian rule during an "unlearning" phase in which examples of noise alone are presented. Each recurrent lateral connection has two values of connection strength, one for each phase; these values are updated by an anti-Hebbian rule.
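The abstract describes the two-phase procedure only at a high level, so the sketch below is a minimal illustrative implementation rather than the paper's exact rule. The outer-product forms of the Hebbian and anti-Hebbian updates, the fixed-point settling y = (I − L)⁻¹Cx for the recurrent lateral connections, the inclusion of lateral self-connections, the row renormalization used as a stabilizer, and all dimensions and learning rates are assumptions introduced here for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_out = 8, 4
eta_c, eta_l = 1e-3, 1e-2                      # learning rates (illustrative)

C = rng.normal(scale=0.1, size=(n_out, n_in))  # feedforward weights
L_learn = np.zeros((n_out, n_out))             # lateral weights, "learning" phase
L_unlearn = np.zeros((n_out, n_out))           # lateral weights, "unlearning" phase

def settle(C, L, x):
    """Output of the linear network with recurrent lateral connections,
    taken here as the fixed point of y = C x + L y (an assumption)."""
    return np.linalg.solve(np.eye(L.shape[0]) - L, C @ x)

def sample_signal():
    return rng.normal(size=n_in)               # toy multivariate Gaussian signal

def sample_noise():
    return 0.3 * rng.normal(size=n_in)         # toy Gaussian noise

for step in range(5000):
    # "Learning" phase: present signal plus noise; Hebbian feedforward update.
    x = sample_signal() + sample_noise()
    y = settle(C, L_learn, x)
    C += eta_c * np.outer(y, x)                # Hebbian
    L_learn -= eta_l * np.outer(y, y)          # anti-Hebbian lateral update

    # "Unlearning" phase: present noise alone; anti-Hebbian feedforward update.
    x = sample_noise()
    y = settle(C, L_unlearn, x)
    C -= eta_c * np.outer(y, x)                # anti-Hebbian
    L_unlearn -= eta_l * np.outer(y, y)        # anti-Hebbian lateral update

    # Stabilizer for this toy demo only (not part of the paper's rule):
    C /= np.maximum(np.linalg.norm(C, axis=1, keepdims=True), 1e-8)
```

Because each phase keeps its own lateral weight matrix, the anti-Hebbian lateral updates accumulate separately over signal-plus-noise and noise-alone presentations, mirroring the abstract's statement that each recurrent lateral connection carries two connection strengths, one per phase.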
Publication Info
- Year: 1992
- Type: article
- Volume: 4
- Issue: 5
- Pages: 691-702
Identifiers
- DOI: 10.1162/neco.1992.4.5.691