Abstract
This paper explores the effect of initial weight selection on feed-forward networks learning simple functions with the back-propagation technique. We first demonstrate, through Monte Carlo techniques, that the magnitude of the initial condition vector (in weight space) is a significant parameter governing the variability of convergence time. To understand this result further, we performed additional deterministic experiments. The results of these experiments demonstrate the extreme sensitivity of back-propagation to the initial weight configuration.
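A minimal sketch of the kind of Monte Carlo experiment the abstract describes, not the authors' actual code: initial weights are drawn uniformly within a chosen magnitude, a small feed-forward network is trained by plain back-propagation on a simple function (XOR here, an assumption), and the epochs needed to converge are recorded across trials. Network size, learning rate, tolerance, and trial counts are all illustrative choices.

```python
import numpy as np

# Simple target function (assumed): XOR on two binary inputs.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def epochs_to_converge(scale, rng, lr=0.5, max_epochs=20000, tol=0.05):
    """Train a 2-2-1 network from weights drawn uniformly in [-scale, scale];
    return the epoch at which mean squared error first drops below tol."""
    W1 = rng.uniform(-scale, scale, (2, 2)); b1 = rng.uniform(-scale, scale, 2)
    W2 = rng.uniform(-scale, scale, (2, 1)); b2 = rng.uniform(-scale, scale, 1)
    for epoch in range(1, max_epochs + 1):
        h = sigmoid(X @ W1 + b1)       # hidden-layer activations
        out = sigmoid(h @ W2 + b2)     # network output
        err = out - y
        if np.mean(err ** 2) < tol:
            return epoch
        # Back-propagate the error through the sigmoid derivatives.
        d_out = err * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
        W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(0)
    return max_epochs  # treated as non-converged within the budget

rng = np.random.default_rng(0)
for scale in (0.1, 0.5, 1.0, 2.0, 4.0):
    times = [epochs_to_converge(scale, rng) for _ in range(20)]
    print(f"|w0| scale {scale:>3}: mean {np.mean(times):7.0f}  "
          f"std {np.std(times):7.0f} epochs")
```

The standard deviation column across trials at each scale is the convergence-time variability the abstract attributes to the magnitude of the initial weight vector.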
Publication Info
- Year: 1990
- Type: article
- Volume: 3
- Pages: 860-867
- Citations: 271
- Access: Closed