Abstract

Recurrent neural networks can be used to map input sequences to output sequences, such as for recognition, production, or prediction problems. However, practical difficulties have been reported in training recurrent neural networks to perform tasks in which the temporal contingencies present in the input/output sequences span long intervals. We show why gradient-based learning algorithms face an increasingly difficult problem as the duration of the dependencies to be captured increases. These results expose a trade-off between efficient learning by gradient descent and latching onto information for long periods. Based on an understanding of this problem, alternatives to standard gradient descent are considered.
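
The trade-off described in the abstract can be illustrated numerically: the backpropagated gradient in a recurrent network is a product of per-step Jacobians, and when the recurrent dynamics are contractive enough to latch information reliably, that product shrinks roughly exponentially with the time lag. The sketch below is not from the paper; it assumes a tanh recurrence with no external input and a randomly initialized recurrent matrix with spectral radius below 1, and prints how the norm of d h_t / d h_0 decays as t grows.

```python
# Minimal illustration (not from the paper) of vanishing gradients in a
# simple tanh RNN: the sensitivity of the state at time t to the state at
# time 0 is a product of t Jacobians, whose norm decays with t when the
# recurrence is contractive enough to store information robustly.
import numpy as np

rng = np.random.default_rng(0)
hidden = 20

# Recurrent weights scaled so the spectral radius is roughly 0.9 (< 1),
# i.e. the contractive regime in which states can be latched.
W = rng.normal(scale=0.9 / np.sqrt(hidden), size=(hidden, hidden))

h = rng.normal(size=hidden)     # initial state h_0
grad = np.eye(hidden)           # accumulates d h_t / d h_0
for t in range(1, 101):
    h = np.tanh(W @ h)
    # Jacobian of h_t with respect to h_{t-1}: diag(1 - tanh^2) @ W
    J = (1.0 - h ** 2)[:, None] * W
    grad = J @ grad
    if t % 20 == 0:
        print(f"t = {t:3d}   ||d h_t / d h_0|| = {np.linalg.norm(grad):.3e}")
```

Scaling the recurrent weights well above 1 instead makes the same product of Jacobians grow, so gradient norms either vanish or explode with the lag; this is the difficulty the paper analyzes.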

Keywords

Gradient descent, Computer science, Artificial intelligence, Stochastic gradient descent, Artificial neural network, Recurrent neural network, Deep learning, Machine learning, Pattern recognition

Related Publications

Long Short-Term Memory

Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We brief...

1997, Neural Computation, 90,535 citations

Publication Info

Year: 1994
Type: article
Volume: 5
Issue: 2
Pages: 157-166
Citations: 8,111
Access: Closed

Citation Metrics

8,111 citations (OpenAlex)

Cite This

Yoshua Bengio, Patrice Simard, Paolo Frasconi (1994). Learning long-term dependencies with gradient descent is difficult. IEEE Transactions on Neural Networks, 5(2), 157-166. https://doi.org/10.1109/72.279181

Identifiers

DOI: 10.1109/72.279181