Abstract

One of the surprising recurring phenomena observed in experiments with boosting is that the test error of the generated classifier usually does not increase as its size becomes very large, and often is observed to decrease even after the training error reaches zero. In this paper, we show that this phenomenon is related to the distribution of margins of the training examples with respect to the generated voting classification rule, where the margin of an example is simply the difference between the number of correct votes and the maximum number of votes received by any incorrect label. We show that techniques used in the analysis of Vapnik’s support vector classifiers and of neural networks with small weights can be applied to voting methods to relate the margin distribution to the test error. We also show theoretically and experimentally that boosting is especially effective at increasing the margins of the training examples. Finally, we compare our explanation to those based on the bias-variance decomposition.
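
As a concrete illustration of the margin defined in the abstract, here is a minimal Python sketch that computes the margin of one training example under a weighted voting classifier. The function name, the normalization of the vote weights (so margins fall in [-1, 1]), and the toy inputs are assumptions made for illustration, not code from the paper.

```python
import numpy as np

def example_margin(votes, weights, true_label, n_labels):
    """Margin of one example: total (normalized) vote weight on the
    correct label minus the largest weight on any incorrect label."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()   # normalize so the margin lies in [-1, 1]
    tally = np.zeros(n_labels)
    for label, w in zip(votes, weights):
        tally[label] += w               # accumulate each base classifier's vote
    correct = tally[true_label]
    wrong = np.max(np.delete(tally, true_label))
    return correct - wrong              # positive iff the example is classified correctly

# Toy example: three equally weighted base classifiers on a 3-class problem.
print(example_margin(votes=[0, 0, 2], weights=[1, 1, 1], true_label=0, n_labels=3))
# 2/3 - 1/3 = 0.333...
```

A positive margin means the weighted vote classifies the example correctly, and a margin near 1 means it does so with near-unanimous confidence; the paper's analysis relates the test error to this margin distribution over the training examples.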

Keywords

Boosting (machine learning), Mathematics, Voting, Margin (machine learning), Pattern recognition (psychology), Artificial intelligence, Support vector machine, Machine learning, Classifier (UML), Statistics, Statistical hypothesis testing, Majority rule, Econometrics, Computer science

Publication Info

Year: 1998
Type: Article
Volume: 26
Issue: 5
Citations: 2310
Access: Closed

Citation Metrics

2310 citations (OpenAlex)

Cite This

Robert E. Schapire, Yoav Freund, Peter Bartlett, Wee Sun Lee (1998). Boosting the margin: a new explanation for the effectiveness of voting methods. The Annals of Statistics, 26(5). https://doi.org/10.1214/aos/1024691352

Identifiers

DOI: 10.1214/aos/1024691352