Abstract

Many neural network learning procedures compute gradients of the errors on the output layer of units after they have settled to their final values. We describe a procedure for finding ∂E/∂w_ij, where E is an error functional of the temporal trajectory of the states of a continuous recurrent network and w_ij are the weights of that network. Computing these quantities allows one to perform gradient descent in the weights to minimize E. Simulations in which networks are taught to move through limit cycles are shown. This type of recurrent network seems particularly suited for temporally continuous domains, such as signal processing, control, and speech.
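
As an illustration of the computation the abstract describes, the sketch below performs gradient descent on a trajectory error E for a small continuous-time recurrent network asked to trace a circular limit cycle. It is not the paper's algorithm: the dynamics are discretized with forward Euler steps and ∂E/∂w_ij is approximated by finite differences rather than by the paper's continuous-time gradient computation; the network size, nonlinearity, target trajectory, and learning rate are arbitrary illustrative assumptions.

```python
# Minimal sketch (not the paper's method): a continuous-time RNN,
# dy/dt = -y + tanh(W y), is Euler-integrated, and dE/dw_ij is estimated
# by central finite differences so gradient descent can reduce the
# trajectory error E.  All sizes and constants are arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)
N = 4                      # units; units 0 and 1 carry the visible trajectory
T, dt = 200, 0.05          # Euler steps and step size
W = 0.1 * rng.standard_normal((N, N))

# Desired trajectory: a circle of radius 0.5 traced by the two visible units.
t = dt * np.arange(T)
target = 0.5 * np.stack([np.cos(t), np.sin(t)], axis=1)

def run(W):
    """Forward-Euler integration of dy/dt = -y + tanh(W y); returns the state history."""
    y = np.full(N, 0.1)    # nonzero start so the dynamics are not trivially at rest
    ys = np.empty((T, N))
    for k in range(T):
        y = y + dt * (-y + np.tanh(W @ y))
        ys[k] = y
    return ys

def error(W):
    """E = 1/2 * integral over time of the squared error on the visible units."""
    ys = run(W)
    return 0.5 * np.sum((ys[:, :2] - target) ** 2) * dt

def grad(W, eps=1e-5):
    """Central finite-difference estimate of dE/dW (a stand-in for the true gradient)."""
    g = np.zeros_like(W)
    for i in range(N):
        for j in range(N):
            Wp, Wm = W.copy(), W.copy()
            Wp[i, j] += eps
            Wm[i, j] -= eps
            g[i, j] = (error(Wp) - error(Wm)) / (2 * eps)
    return g

lr = 0.2
for step in range(200):
    W -= lr * grad(W)
    if step % 50 == 0:
        print(f"step {step:3d}  E = {error(W):.4f}")
```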

Keywords

Recurrent neural network, Gradient descent, Artificial neural network, Trajectory, State space, Limit (mathematics), Echo state network, Computer science, State (computer science), Space (punctuation), Mathematics, Artificial intelligence, Algorithm, Mathematical analysis, Physics

Publication Info

Year: 1989
Type: Article
Volume: 1
Issue: 2
Pages: 263-269
Citations: 674
Access: Closed

Citation Metrics

674 citations (OpenAlex)

Cite This

Barak A. Pearlmutter (1989). Learning State Space Trajectories in Recurrent Neural Networks. Neural Computation, 1(2), 263-269. https://doi.org/10.1162/neco.1989.1.2.263

Identifiers

DOI: 10.1162/neco.1989.1.2.263