Abstract

Theoretical and empirical evidence indicates that the depth of neural networks is crucial for their success. However, training becomes more difficult as depth increases, and training of very deep networks remains an open problem. Here we introduce a new architecture designed to overcome this. Our so-called highway networks allow unimpeded information flow across many layers on information highways. They are inspired by Long Short-Term Memory recurrent networks and use adaptive gating units to regulate the information flow. Even with hundreds of layers, highway networks can be trained directly through simple gradient descent. This enables the study of extremely deep and efficient architectures.
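To make the gating mechanism concrete, here is a minimal sketch of a single highway layer in a PyTorch style. It follows the paper's formulation y = H(x)·T(x) + x·(1 − T(x)), where T is a sigmoid transform gate; the layer width, ReLU nonlinearity, and the specific gate-bias value below are illustrative assumptions, not values stated on this page.

```python
import torch
import torch.nn as nn


class HighwayLayer(nn.Module):
    """One highway layer: y = H(x) * T(x) + x * (1 - T(x)).

    H is an affine transform with a nonlinearity, T is a sigmoid
    "transform gate"; the carry gate is tied as C = 1 - T.
    """

    def __init__(self, dim: int, gate_bias: float = -2.0):
        super().__init__()
        self.transform = nn.Linear(dim, dim)   # H(x, W_H)
        self.gate = nn.Linear(dim, dim)        # T(x, W_T)
        # A negative initial gate bias biases the layer toward carrying
        # the input through unchanged early in training (illustrative value).
        nn.init.constant_(self.gate.bias, gate_bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = torch.relu(self.transform(x))
        t = torch.sigmoid(self.gate(x))
        return h * t + x * (1.0 - t)


# Stack many layers; gradients can still flow through the carry path.
model = nn.Sequential(*[HighwayLayer(64) for _ in range(50)])
y = model(torch.randn(8, 64))
```

Because the carry path passes x through unchanged wherever the gate closes, very deep stacks of such layers remain trainable with plain gradient descent, which is the central claim of the abstract above.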

Keywords

Computer science, Training, Deep neural networks, Artificial intelligence, Artificial neural network, Information flow, Architecture, Deep learning, Gradient descent, Distributed computing

Related Publications

Long Short-Term Memory

Learning to store information over extended time intervals by recurrent backpropagation takes a very long time, mostly because of insufficient, decaying error backflow. We brief...

1997, Neural Computation, 90,535 citations

Publication Info

Year: 2015
Type: Preprint
Citations: 1100
Access: Closed

Citation Metrics

1100 citations (OpenAlex)

Cite This

Rupesh K. Srivastava, Klaus Greff, Jürgen Schmidhuber (2015). Training Very Deep Networks. arXiv (Cornell University).