
Shrinkage tuning parameter selection with a diverging number of parameters. (English) Zbl 1250.62036
Summary: Contemporary statistical research frequently deals with problems involving a diverging number of parameters. For such problems, various shrinkage methods (e.g., the lasso and smoothly clipped absolute deviation) are found to be particularly useful for variable selection. Nevertheless, the desirable performance of these shrinkage methods hinges heavily on an appropriate selection of the tuning parameters. With a fixed predictor dimension, Wang and co-workers have demonstrated that the tuning parameters selected by a Bayesian information criterion (BIC) type criterion can identify the true model consistently. In this work, similar results are extended to the situation with a diverging number of parameters, for both unpenalized and penalized estimators. Consequently, our theoretical results further enlarge not only the scope of applicability of BIC-type criteria but also that of those shrinkage estimation methods.
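To make the criterion concrete, the following is a minimal Python sketch of BIC-type tuning parameter selection for the lasso, in the spirit of the approach summarized above. The helper name `bic_select_lasso`, the candidate grid, and the choice of the diverging factor `Cn = log(log(p))` are illustrative assumptions, not the authors' exact construction.

```python
import numpy as np
from sklearn.linear_model import Lasso

def bic_select_lasso(X, y, lambdas, Cn=None):
    """Pick the lasso tuning parameter minimizing a BIC-type criterion.

    BIC(lam) = log(SSE/n) + df * (log n / n) * Cn,
    where df is the number of nonzero coefficients (the usual lasso
    degrees-of-freedom estimate) and Cn is a slowly diverging factor;
    Cn = 1 recovers the classical BIC used in the fixed-dimension case.
    """
    n, p = X.shape
    if Cn is None:
        # Hypothetical choice of the diverging factor for growing p;
        # the theory only requires Cn -> infinity slowly.
        Cn = np.log(np.log(max(p, 3)))
    best_lam, best_bic = None, np.inf
    for lam in lambdas:
        fit = Lasso(alpha=lam).fit(X, y)
        resid = y - fit.predict(X)
        sse = resid @ resid
        df = np.count_nonzero(fit.coef_)
        bic = np.log(sse / n) + df * np.log(n) / n * Cn
        if bic < best_bic:
            best_lam, best_bic = lam, bic
    return best_lam

# Example usage on synthetic sparse data
rng = np.random.default_rng(0)
n, p = 200, 50
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:3] = [3.0, 1.5, 2.0]
y = X @ beta + rng.standard_normal(n)
lam = bic_select_lasso(X, y, lambdas=np.logspace(-3, 0, 30))
print("selected lambda:", lam)
```

Because the penalty on model size is inflated by the factor Cn, the selector can remain consistent even as the number of candidate predictors diverges, which is the point of the extension described in the summary.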

MSC:
62J07 Ridge regression; shrinkage estimators (Lasso)
62H99 Multivariate analysis
References:
[1] Bai, Spectral Analysis of Large Dimensional Random Matrices (2006)
[2] Fan, Variable selection via nonconcave penalized likelihood and its oracle properties, J. Am. Statist. Ass. 96 pp 1348– (2001) · Zbl 1073.62547
[3] Fan, Proc. Int. Congr. Mathematicians, vol. III pp 595– (2006)
[4] Fan, On non-concave penalized likelihood with diverging number of parameters, Ann. Statist. 32 pp 928– (2004)
[5] Fu, Penalized regression: the bridge versus the LASSO, J. Comput. Graph. Statist. 7 pp 397– (1998)
[6] Huang, Asymptotic properties of bridge estimators in sparse high-dimensional regression models, Ann. Statist. 36 pp 587– (2008) · Zbl 1133.62048
[7] Huang (2007)
[8] Schwarz, Estimating the dimension of a model, Ann. Statist. 6 pp 461– (1978) · Zbl 0379.62005
[9] Shao, An asymptotic theory for linear model selection, Statist. Sin. 7 pp 221– (1997) · Zbl 1003.62527
[10] Shi, Regression model selection-a residual likelihood approach, J. R. Statist. Soc. B 64 pp 237– (2002) · Zbl 1059.62074
[11] Tibshirani, Regression shrinkage and selection via the lasso, J. R. Statist. Soc. B 58 pp 267– (1996) · Zbl 0850.62538
[12] Wang, Unified LASSO estimation by least squares approximation, J. Am. Statist. Ass. 102 pp 1039– (2007) · Zbl 1306.62167
[13] Wang, Regression coefficient and autoregressive order shrinkage and selection via the lasso, J. R. Statist. Soc. B 69 pp 63– (2007)
[14] Wang, On the consistency of SCAD tuning parameter selector, Biometrika 94 pp 553– (2007)
[15] Xie, SCAD-penalized regression in high-dimensional partially linear models, Ann. Statist. (2008) · Zbl 1162.62037
[16] Yang, Can the strengths of AIC and BIC be shared?: a conflict between model identification and regression estimation, Biometrika 92 pp 937– (2005) · Zbl 1151.62301
[17] Zhang, Adaptive LASSO for Cox’s proportional hazards model, Biometrika 94 pp 691– (2007) · Zbl 1135.62083
[18] Zhao, Consistent linear model selection, Statist. Probab. Lett. 76 pp 520– (2006) · Zbl 1141.62333
[19] Zou, The adaptive LASSO and its oracle properties, J. Am. Statist. Ass. 101 pp 1418– (2006) · Zbl 1171.62326
[20] Zou, One-step sparse estimates in nonconcave penalized likelihood models (with discussion), Ann. Statist. 36 pp 1509– (2008)