Abstract
In model selection, usually a "best" predictor is chosen from a collection $\{\hat{\mu}(\cdot, s)\}$ of predictors, where $\hat{\mu}(\cdot, s)$ is the minimum least-squares predictor in a collection $\mathsf{U}_s$ of predictors. Here $s$ is a complexity parameter; that is, the smaller $s$, the lower dimensional/smoother the models in $\mathsf{U}_s$.

If $\mathsf{L}$ is the data used to derive the sequence $\{\hat{\mu}(\cdot, s)\}$, the procedure is called unstable if a small change in $\mathsf{L}$ can cause large changes in $\{\hat{\mu}(\cdot, s)\}$. With a crystal ball, one could pick the predictor in $\{\hat{\mu}(\cdot, s)\}$ having minimum prediction error. Without prescience, one uses test sets, cross-validation and so forth. The difference in prediction error between the crystal ball selection and the statistician's choice we call predictive loss. For an unstable procedure the predictive loss is large. This is shown by some analytics in a simple case and by simulation results in a more complex comparison of four different linear regression methods. Unstable procedures can be stabilized by perturbing the data, getting a new predictor sequence $\{\hat{\mu}'(\cdot, s)\}$ and then averaging over many such predictor sequences.
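The stabilization recipe the abstract ends with, perturb the data, refit the predictor, and average over many such refits, can be sketched in a few lines. Everything below is an illustrative assumption, not the paper's exact procedure: the subset selector is a cheap correlation screen rather than exact best-subset least squares, the perturbation is bootstrap resampling, and all function names and parameters (`best_subset_predictor`, `stabilized_predictor`, `n_perturb`) are invented for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def best_subset_predictor(X, y, s):
    """Least-squares predictor on the s features most correlated with y.

    A cheap stand-in for the minimum least-squares predictor in U_s;
    exact best-subset search would play the same role.
    """
    corr = np.abs(X.T @ (y - y.mean()))
    idx = np.argsort(corr)[-s:]                     # keep the top-s features
    beta, *_ = np.linalg.lstsq(X[:, idx], y, rcond=None)
    def mu(Xnew):
        return Xnew[:, idx] @ beta
    return mu

def stabilized_predictor(X, y, s, n_perturb=25, rng=rng):
    """Stabilize an unstable selector: perturb the data (bootstrap
    resample), refit, and average the resulting predictors."""
    mus = []
    for _ in range(n_perturb):
        b = rng.integers(0, len(y), len(y))          # perturbed copy of L
        mus.append(best_subset_predictor(X[b], y[b], s))
    def mu(Xnew):
        return np.mean([m(Xnew) for m in mus], axis=0)
    return mu

# Toy data: sparse linear truth plus noise.
n, p = 60, 10
X = rng.standard_normal((n, p))
beta_true = np.zeros(p)
beta_true[:3] = [3.0, 2.0, 1.5]
y = X @ beta_true + rng.standard_normal(n)

# Compare prediction error against the noiseless truth on fresh inputs.
Xtest = rng.standard_normal((2000, p))
mu_test = Xtest @ beta_true
pe_single = np.mean((best_subset_predictor(X, y, 3)(Xtest) - mu_test) ** 2)
pe_stable = np.mean((stabilized_predictor(X, y, 3)(Xtest) - mu_test) ** 2)
print(pe_single, pe_stable)
```

The averaging step is what damps instability: individual bootstrap refits may select different feature subsets, but their average varies much less with small changes in the data.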
Publication Info
- Year: 1996
- Type: article
- Volume: 24
- Issue: 6
Identifiers
- DOI: 10.1214/aos/1032181158