Abstract

Feedforward neural networks trained by error backpropagation are examples of nonparametric regression estimators. We present a tutorial on nonparametric inference and its relation to neural networks, and we use the statistical viewpoint to highlight strengths and weaknesses of neural models. We illustrate the main points with some recognition experiments involving artificial data as well as handwritten numerals. By way of conclusion, we suggest that current-generation feedforward neural networks are largely inadequate for difficult problems in machine perception and machine learning, regardless of parallel-versus-serial hardware or other implementation issues. Furthermore, we suggest that the fundamental challenges in neural modeling are about representation rather than learning per se. This last point is supported by additional experiments with handwritten numerals.
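The bias/variance trade-off named in the paper's title can be illustrated with a small Monte Carlo sketch (not from the paper itself): a rigid estimator fit to noisy data shows high squared bias and low variance, while a flexible one shows the reverse. The function, noise level, and polynomial degrees below are illustrative choices, not taken from the original experiments.

```python
import numpy as np

rng = np.random.default_rng(0)

def true_f(x):
    # Hypothetical target function for the illustration.
    return np.sin(2 * np.pi * x)

def bias_variance(degree, n_train=30, n_trials=200, noise=0.3):
    """Monte Carlo estimate of the squared bias and variance of a
    polynomial least-squares fit, averaged over a fixed test grid."""
    x_test = np.linspace(0.0, 1.0, 50)
    preds = np.empty((n_trials, x_test.size))
    for t in range(n_trials):
        # Draw a fresh noisy training set each trial.
        x = rng.uniform(0.0, 1.0, n_train)
        y = true_f(x) + rng.normal(0.0, noise, n_train)
        coeffs = np.polyfit(x, y, degree)
        preds[t] = np.polyval(coeffs, x_test)
    mean_pred = preds.mean(axis=0)
    bias2 = np.mean((mean_pred - true_f(x_test)) ** 2)  # (E[f_hat] - f)^2
    variance = np.mean(preds.var(axis=0))               # E[(f_hat - E[f_hat])^2]
    return bias2, variance

b1, v1 = bias_variance(degree=1)  # rigid model: high bias, low variance
b9, v9 = bias_variance(degree=9)  # flexible model: low bias, high variance
```

Running this, the degree-1 fit has much larger squared bias than the degree-9 fit, while the degree-9 fit has larger variance, which is the dilemma in miniature: neither extreme minimizes total error.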

Keywords

Artificial neural network, Computer science, Artificial intelligence, Feedforward neural network, Estimator, Machine learning, Nonparametric statistics, Representation, Backpropagation, Inference, Time delay neural network, Variance, Feedforward, Relation, Pattern recognition, Data mining, Mathematics, Statistics

Publication Info

Year: 1992
Type: Article
Volume: 4
Issue: 1
Pages: 1-58
Citations: 3465
Access: Closed

Citation Metrics

3465 citations (OpenAlex)

Cite This

Stuart Geman, Elie Bienenstock, René Doursat (1992). Neural Networks and the Bias/Variance Dilemma. Neural Computation, 4(1), 1-58. https://doi.org/10.1162/neco.1992.4.1.1

Identifiers

DOI
10.1162/neco.1992.4.1.1