Abstract

In this paper, we investigate the capabilities of local feedback multilayered networks, a particular class of recurrent networks in which feedback connections are allowed only from neurons to themselves. In this class, learning can be accomplished by an algorithm that is local in both space and time. We describe the limits and properties of these networks and offer some insight into their use for solving practical problems.
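
As a reading aid (not part of the original paper), the sketch below illustrates the kind of architecture and learning rule the abstract describes: each hidden unit carries a single self-feedback weight, so the sensitivity of its state to its own weights can be propagated forward by a recurrence that involves only that unit, which is what "local in both space and time" refers to. All names (LocalFeedbackLayer, step), the single-layer setup, and the numeric choices are illustrative assumptions, not the authors' exact formulation.

# Minimal sketch (assumption: not the paper's exact algorithm) of a local
# feedback layer: each hidden unit feeds back only onto itself, so gradient
# information can be carried forward per unit, locally in space and time.

import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

class LocalFeedbackLayer:
    def __init__(self, n_in, n_hidden, seed=0):
        rng = np.random.default_rng(seed)
        self.U = 0.1 * rng.standard_normal((n_hidden, n_in))  # input weights U[i, j]
        self.w = 0.1 * rng.standard_normal(n_hidden)           # self-feedback weights w[i]
        self.x = np.zeros(n_hidden)                            # unit states x_i(t)
        self.dx_dw = np.zeros(n_hidden)                        # d x_i(t) / d w_i
        self.dx_dU = np.zeros((n_hidden, n_in))                # d x_i(t) / d U_ij

    def step(self, inp):
        # Forward pass: a_i(t) = w_i * x_i(t-1) + sum_j U_ij * inp_j(t)
        a = self.w * self.x + self.U @ inp
        x_new = sigmoid(a)
        fprime = x_new * (1.0 - x_new)
        # Sensitivity recurrences: each uses only unit i's own previous state
        # and previous sensitivity, hence local in both space and time.
        self.dx_dw = fprime * (self.x + self.w * self.dx_dw)
        self.dx_dU = fprime[:, None] * (inp[None, :] + self.w[:, None] * self.dx_dU)
        self.x = x_new
        return x_new

# Illustrative usage: drive each unit's state toward a fixed target with purely
# local gradient-descent updates (target and learning rate are arbitrary).
layer = LocalFeedbackLayer(n_in=3, n_hidden=5)
target, lr = np.full(5, 0.5), 0.1
for t in range(50):
    x = layer.step(np.sin(0.3 * t) * np.ones(3))
    err = x - target                        # per-unit squared-error gradient
    layer.w -= lr * err * layer.dx_dw
    layer.U -= lr * err[:, None] * layer.dx_dU

Because each unit's sensitivities depend only on that unit's own past, storage and computation grow linearly with the number of units, in contrast to full real-time recurrent learning over arbitrary feedback connections.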

Keywords

Class (philosophy), Computer science, Space (punctuation), Artificial intelligence, Theoretical computer science, Mathematics

Publication Info

Year: 1992
Type: article
Volume: 4
Issue: 1
Pages: 120-130
Citations: 194
Access: Closed

Citation Metrics

194 citations (OpenAlex)

Cite This

Paolo Frasconi, Marco Gori, G. Soda (1992). Local Feedback Multilayered Networks. Neural Computation, 4(1), 120-130. https://doi.org/10.1162/neco.1992.4.1.120

Identifiers

DOI: 10.1162/neco.1992.4.1.120