Abstract

We introduce a constructive, incremental learning system for regression problems that models data by means of locally linear experts. In contrast to other approaches, the experts are trained independently and do not compete for data during learning. Only when a prediction for a query is required do the experts cooperate by blending their individual predictions. Each expert is trained by minimizing a penalized local cross-validation error using second-order methods. In this way, an expert is able to find a local distance metric by adjusting the size and shape of the receptive field in which its predictions are valid, and also to detect relevant input features by adjusting its bias on the importance of individual input dimensions. We derive asymptotic results for our method. In a variety of simulations, the properties of the algorithm are demonstrated with respect to interference, learning speed, prediction accuracy, feature detection, and task-oriented incremental learning.
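
The prediction step described above lends itself to a short sketch. The Python code below is a minimal illustration, not the paper's implementation: it blends the outputs of independently trained locally linear experts using Gaussian receptive-field activations. The `Expert` class, the Gaussian form of the receptive field, and all parameter values are assumptions made for this example; the training procedure (the penalized local cross-validation that shapes each field's distance metric) is omitted.

```python
import numpy as np

class Expert:
    """One locally linear expert with a Gaussian receptive field (illustrative)."""

    def __init__(self, center, metric, beta, beta0):
        self.c = np.asarray(center, dtype=float)   # receptive field center
        self.D = np.asarray(metric, dtype=float)   # distance metric (field size/shape)
        self.beta = np.asarray(beta, dtype=float)  # local linear regression slope
        self.beta0 = float(beta0)                  # local intercept

    def activation(self, x):
        # Gaussian receptive field: w = exp(-0.5 * (x - c)^T D (x - c))
        d = x - self.c
        return np.exp(-0.5 * d @ self.D @ d)

    def predict(self, x):
        # Local linear prediction around the field center
        return self.beta0 + (x - self.c) @ self.beta

def blended_prediction(experts, x):
    """Experts cooperate only at query time: blend their individual
    predictions, weighted by normalized receptive-field activations."""
    x = np.asarray(x, dtype=float)
    w = np.array([e.activation(x) for e in experts])
    y = np.array([e.predict(x) for e in experts])
    return (w @ y) / w.sum()

if __name__ == "__main__":
    # Toy example: two experts tiling a 1-D input space
    e1 = Expert(center=[0.0], metric=[[4.0]], beta=[1.0], beta0=0.0)
    e2 = Expert(center=[1.0], metric=[[4.0]], beta=[-1.0], beta0=1.0)
    print(blended_prediction([e1, e2], [0.5]))
```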

Keywords

Computer science, Boosting (machine learning), Machine learning, Artificial intelligence, Decision tree, Contrast (vision), Gradient boosting, Constructive, Metric (unit), Field (mathematics), Feature (linguistics), Regression, Data mining, Mathematics, Statistics, Random forest, Process (computing)

Related Publications

Bagging, boosting, and C4.5

Breiman's bagging and Freund and Schapire's boosting are recent methods for improving the predictive power of classifier learning systems. Both form a set of classifiers that are...

1996 National Conference on Artificial Intelligence, 1262 citations

Publication Info

Year: 1995
Type: article
Volume: 8
Pages: 479-485
Citations: 231
Access: Closed

Citation Metrics

231 citations (OpenAlex)

Cite This

Harris Drucker, Corinna Cortes (1995). Boosting Decision Trees. Neural Information Processing Systems, 8, 479-485.