Abstract

The exact form of a gradient-following learning algorithm for completely recurrent networks running in continually sampled time is derived and used as the basis for practical algorithms for temporal supervised learning tasks. These algorithms have (1) the advantage that they do not require a precisely defined training interval, operating while the network runs; and (2) the disadvantage that they require nonlocal communication in the network being trained and are computationally expensive. These algorithms allow networks having recurrent connections to learn complex tasks that require the retention of information over time periods having either fixed or indefinite length.
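
The method the abstract describes is what is now commonly called real-time recurrent learning (RTRL): the exact gradient is carried forward in time through a table of sensitivities ∂y_k/∂w_ij, one entry per unit per weight, which is the source of the nonlocal communication and computational cost noted above (roughly O(n^3) storage and O(n^4) arithmetic per time step for n fully connected units). The sketch below shows one forward/update step in that spirit; it is a minimal illustration, and the logistic nonlinearity, the learning rate, the array shapes, and the name `rtrl_step` are assumptions made for the example rather than details taken from the paper.

```python
import numpy as np

# Minimal sketch of a real-time recurrent learning (RTRL) step for a fully
# recurrent network trained while it runs. The sensitivity tensor
# p[k, i, j] = d y_k / d w_ij is carried forward from step to step.
# Logistic units and the learning rate are illustrative assumptions.

def rtrl_step(w, y, x, p, target, mask, lr=0.1):
    """One time step: run the network, update sensitivities, change weights.

    w      : (n, n + m) weights (recurrent columns first, then input columns)
    y      : (n,) current unit outputs
    x      : (m,) current external input
    p      : (n, n, n + m) sensitivities d y_k / d w_ij from the previous step
    target : (n,) desired outputs; only entries where mask is True are used
    mask   : (n,) boolean, which units receive a teacher signal this step
    """
    n, total = w.shape
    z = np.concatenate([y, x])            # unit outputs and inputs, concatenated
    s = w @ z                             # net inputs
    y_new = 1.0 / (1.0 + np.exp(-s))      # logistic units (an assumption)
    fprime = y_new * (1.0 - y_new)

    # Sensitivity recursion:
    #   p_new[k, i, j] = f'(s_k) * ( sum_l w[k, l] * p[l, i, j] + delta_{ki} * z[j] )
    wp = np.einsum('kl,lij->kij', w[:, :n], p)
    kron = np.zeros((n, n, total))
    kron[np.arange(n), np.arange(n), :] = z   # the delta_{ki} * z_j term
    p_new = fprime[:, None, None] * (wp + kron)

    # Error only on the units that have a target at this time step.
    e = np.where(mask, target - y_new, 0.0)

    # Real-time weight change: dw_ij = lr * sum_k e_k * p_new[k, i, j],
    # applied immediately rather than accumulated over a fixed interval.
    dw = lr * np.einsum('k,kij->ij', e, p_new)
    return w + dw, y_new, p_new
```

Starting from y and p initialized to zeros and calling rtrl_step once per time step trains the network as it runs, with teacher signals supplied only at whichever steps the task defines them; this is what allows operation without a precisely defined training interval.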

Keywords

Computer science, Artificial neural network, Algorithm, Recurrent neural network, Artificial intelligence, Basis (linear algebra), Machine learning, Wake-sleep algorithm, Mathematics, Generalization error

Related Publications

Long Short-Term Memory

Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We brief...

1997 · Neural Computation · 90,535 citations

Non-local Neural Networks

Both convolutional and recurrent operations are building blocks that process one local neighborhood at a time. In this paper, we present non-local operations as a generic family...

2018 · 2018 IEEE/CVF Conference on Computer ... · 10,740 citations

Publication Info

Year: 1989
Type: article
Volume: 1
Issue: 2
Pages: 270-280
Citations: 4324
Access: Closed

Citation Metrics

4324 (OpenAlex)

Cite This

Ronald J. Williams, David Zipser (1989). A Learning Algorithm for Continually Running Fully Recurrent Neural Networks. Neural Computation, 1(2), 270-280. https://doi.org/10.1162/neco.1989.1.2.270

Identifiers

DOI
10.1162/neco.1989.1.2.270