Abstract

Significance

While breakthroughs in machine learning and artificial intelligence are changing society, our fundamental understanding has lagged behind. It is traditionally believed that fitting models to the training data exactly is to be avoided as it leads to poor performance on unseen data. However, powerful modern classifiers frequently have near-perfect fit in training, a disconnect that spurred recent intensive research and controversy on whether theory provides practical insights. In this work, we show how classical theory and modern practice can be reconciled within a single unified performance curve and propose a mechanism underlying its emergence. We believe this previously unknown pattern connecting the structure and performance of learning architectures will help shape design and understanding of learning algorithms.
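The unified performance curve described here is the "double descent" risk curve: test error first follows the classical U-shaped bias–variance curve, peaks near the interpolation threshold (where the model can fit the training data exactly), and then descends again as capacity grows further. A minimal sketch of the phenomenon, in the spirit of the paper's random Fourier features experiments but on synthetic data with illustrative parameter values chosen here (not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D regression task: noisy samples of a sine wave.
n_train, n_test = 20, 200
x_train = rng.uniform(-1, 1, n_train)
x_test = rng.uniform(-1, 1, n_test)
target = lambda x: np.sin(2 * np.pi * x)
y_train = target(x_train) + 0.1 * rng.standard_normal(n_train)
y_test = target(x_test)

def rff(x, n_features, seed=1):
    """Random Fourier features: cos(w*x + b) with random w, b.
    Feature count plays the role of model capacity."""
    r = np.random.default_rng(seed)
    w = r.standard_normal(n_features) * 4.0
    b = r.uniform(0, 2 * np.pi, n_features)
    return np.cos(np.outer(x, w) + b)

def fit_min_norm(n_features):
    """Least-squares fit; past the interpolation threshold
    (n_features >= n_train), pinv returns the minimum-norm
    solution that fits the training data exactly."""
    phi_tr = rff(x_train, n_features)
    phi_te = rff(x_test, n_features)
    coef = np.linalg.pinv(phi_tr) @ y_train
    train_mse = np.mean((phi_tr @ coef - y_train) ** 2)
    test_mse = np.mean((phi_te @ coef - y_test) ** 2)
    return train_mse, test_mse

# Sweep capacity from under- to heavily over-parameterized.
for n_features in [5, 10, 20, 40, 200, 1000]:
    tr, te = fit_min_norm(n_features)
    print(f"{n_features:5d} features: train MSE {tr:.2e}, test MSE {te:.2e}")
```

Below the interpolation threshold the training error is nonzero; at and beyond it, the fit is exact and generalization is governed by the norm of the interpolating solution rather than by the parameter count alone.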

Keywords

Variance (accounting), Economics, Econometrics, Artificial intelligence, Machine learning, Computer science, Accounting

Related Publications

An Overview of Overfitting and its Solutions

Overfitting is a fundamental issue in supervised machine learning that prevents models from generalizing well from observed training data to unseen data, as well as...

2019 · Journal of Physics: Conference Series · 2055 citations

Publication Info

Year: 2019
Type: Article
Volume: 116
Issue: 32
Pages: 15849-15854
Citations: 1391
Access: Closed


Citation Metrics

1391 citations (source: OpenAlex)

Cite This

Mikhail Belkin, Daniel Hsu, Siyuan Ma, et al. (2019). Reconciling modern machine-learning practice and the classical bias–variance trade-off. Proceedings of the National Academy of Sciences, 116(32), 15849-15854. https://doi.org/10.1073/pnas.1903070116

Identifiers

DOI: 10.1073/pnas.1903070116