
Estimating the dimension of a model. (English) Zbl 0379.62005


MSC:

62C10 Bayesian problems; characterization of Bayes procedures
62F15 Bayesian inference
62J99 Linear inference, regression
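The reviewed paper introduced what is now known as the Bayesian information criterion (BIC), \(\mathrm{BIC} = -2\ln L + k\ln n\), where \(L\) is the maximized likelihood, \(k\) the number of free parameters, and \(n\) the sample size. As a minimal illustrative sketch (not part of the original entry; the simulated data and helper names are this editor's own), the following Python snippet uses BIC to select a polynomial degree under a Gaussian noise model:

```python
import numpy as np

def bic(log_likelihood, k, n):
    # Schwarz (1978) criterion: -2 ln L + k ln n; lower is better.
    return -2.0 * log_likelihood + k * np.log(n)

rng = np.random.default_rng(0)
n = 200
x = np.linspace(0.0, 1.0, n)
y = 1.0 + 2.0 * x + rng.normal(0.0, 0.5, n)   # true model is linear (degree 1)

scores = {}
for degree in range(1, 6):
    coeffs = np.polyfit(x, y, degree)
    resid = y - np.polyval(coeffs, x)
    sigma2 = np.mean(resid ** 2)              # Gaussian MLE of the noise variance
    loglik = -0.5 * n * (np.log(2.0 * np.pi * sigma2) + 1.0)
    # k = (degree + 1) regression coefficients plus 1 for the noise variance
    scores[degree] = bic(loglik, degree + 2, n)

best_degree = min(scores, key=scores.get)
```

Because the log-likelihood gain from a spurious extra coefficient is typically far smaller than the \(\ln n\) penalty, `best_degree` recovers the true linear model here.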

References:

[1] Akaike H. Information theory and an extension of the maximum likelihood principle. In: Petrov B N, Csaki F, eds. Second International Symposium on Information Theory. Budapest: Akademiai Kiado, 1973. 267-281 · Zbl 0283.62006
[2] Schwarz G. Estimating the dimension of a model. Ann Statist, 1978, 6: 461-464 · Zbl 0379.62005
[3] Mallows C L. Some comments on \(C_p\). Technometrics, 1973, 15: 661-675 · Zbl 0269.62061
[4] Tibshirani R. Regression shrinkage and selection via the Lasso. J Royal Statist Soc B, 1996, 58: 267-288 · Zbl 0850.62538
[5] Donoho D L, Huo X. Uncertainty principles and ideal atomic decomposition. IEEE Trans Inf Theory, 2001, 47: 2845-2862 · Zbl 1019.94503
[6] Donoho D L, Elad M. Maximal sparsity representation via \(l_1\) minimization. Proc Natl Acad Sci, 2003, 100: 2197-2202 · Zbl 1064.94011
[7] Chen S, Donoho D L, Saunders M. Atomic decomposition by basis pursuit. SIAM Rev, 2001, 43: 129-159 · Zbl 0979.94010
[8] Fan J, Peng H. Nonconcave penalized likelihood with a diverging number of parameters. Ann Statist, 2004, 32: 928-961 · Zbl 1092.62031
[9] Zou H. The adaptive Lasso and its oracle properties. J Amer Statist Assoc, 2006, 101: 1418-1429 · Zbl 1171.62326
[10] Zou H, Hastie T. Regularization and variable selection via the elastic net. J Royal Statist Soc B, 2005, 67: 301-320 · Zbl 1069.62054
[11] Zhao P, Yu B. Stagewise Lasso. J Mach Learn Res, 2007, 8: 2701-2726 · Zbl 1222.68345
[12] Candes E, Tao T. The Dantzig selector: Statistical estimation when \(p\) is much larger than \(n\). Ann Statist, 2007, 35: 2313-2351 · Zbl 1139.62019
[13] Knight K, Fu W J. Asymptotics for lasso-type estimators. Ann Statist, 2000, 28: 1356-1378 · Zbl 1105.62357
[14] Friedman J, Hastie T, Tibshirani R. Additive logistic regression: a statistical view of boosting. Ann Statist, 2000, 28: 337-407 · Zbl 1106.62323
[15] Efron B, Hastie T, Johnstone I, et al. Least angle regression. Ann Statist, 2004, 32: 407-499 · Zbl 1091.62054
[16] Rosset S, Zhu J. Piecewise linear regularization solution paths. Ann Statist, 2007, 35: 1012-1030 · Zbl 1194.62094
[17] Kim S J, Koh K, Lustig M, et al. An interior-point method for large-scale \(l_1\)-regularized least squares. IEEE J Sel Top Signal Process, 2007, 1: 606-617
[18] Horst R, Thoai N V. DC programming: overview. J Optim Theory Appl, 1999, 103: 1-41
[19] Yuille A, Rangarajan A. The concave-convex procedure (CCCP). In: Advances in Neural Information Processing Systems 14. Cambridge, MA: MIT Press, 2002
[20] Candes E, Wakin M, Boyd S. Enhancing sparsity by reweighted \(l_1\) minimization. J Fourier Anal Appl, 2008, 14: 877-905 · Zbl 1176.94014
[21] Blake C, Merz C. Repository of Machine Learning Databases [DB/OL]. Irvine, CA: University of California, Department of Information and Computer Science, 1998
[22] Candes E, Romberg J, Tao T. Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Trans Inf Theory, 2006, 52: 489-509 · Zbl 1231.94017
[23] Donoho D L. Compressed sensing. IEEE Trans Inf Theory, 2006, 52: 1289-1306 · Zbl 1288.94016
[24] Candes E,
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. In some cases these data have been complemented or enhanced by data from zbMATH Open. This attempts to reflect the references listed in the original paper as accurately as possible, without claiming completeness or a perfect matching.