
The Lasso problem and uniqueness. (English) Zbl 1337.62173
Summary: The lasso is a popular tool for sparse linear regression, especially for problems in which the number of variables \(p\) exceeds the number of observations \(n\). But when \(p>n\), the lasso criterion is not strictly convex, and hence it may not have a unique minimizer. An important question is: when is the lasso solution well-defined (unique)? We review results from the literature, which show that if the predictor variables are drawn from a continuous probability distribution, then there is a unique lasso solution with probability one, regardless of the sizes of \(n\) and \(p\). We also show that this result extends easily to \(\ell_{1}\) penalized minimization problems over a wide range of loss functions.
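For reference, the criterion in question is the usual lasso objective, with response \(y \in \mathbb{R}^{n}\), predictor matrix \(X \in \mathbb{R}^{n \times p}\), and tuning parameter \(\lambda \geq 0\):
\[
\hat{\beta} \in \operatorname*{arg\,min}_{\beta \in \mathbb{R}^{p}} \; \frac{1}{2}\|y - X\beta\|_{2}^{2} + \lambda\|\beta\|_{1}.
\]
When \(p > n\), the Hessian \(X^{\mathsf{T}}X\) of the quadratic term is rank deficient, so the criterion is convex but not strictly convex, and the set of minimizers need not be a single point.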
A second important question is: how can we manage the case of non-uniqueness in lasso solutions? In light of the aforementioned result, this case really only arises when some of the predictor variables are discrete, or when some post-processing has been performed on continuous predictor measurements. Though we certainly cannot claim to provide a complete answer to such a broad question, we do present progress towards understanding some aspects of non-uniqueness. First, we extend the LARS algorithm for computing the lasso solution path to cover the non-unique case, so that this path algorithm works for any predictor matrix. Next, we derive a simple method for computing the component-wise uncertainty in lasso solutions of any given problem instance, based on linear programming. Finally, we review results from the literature on some of the unifying properties of lasso solutions, and also point out particular forms of solutions that have distinctive properties.
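To illustrate the linear-programming idea (a minimal sketch under stated assumptions, not necessarily the exact formulation used in the paper): for \(\lambda > 0\), every lasso solution has the same fitted value \(X\hat{\beta}\) and the same \(\ell_{1}\) norm \(L = \|\hat{\beta}\|_{1}\), since the criterion depends on \(\beta\) only through these two quantities. The solution set is therefore the polyhedron \(\{\beta : X\beta = X\hat{\beta},\ \|\beta\|_{1} \leq L\}\), and the smallest and largest values of any one component over this set are given by two linear programs. The helper below (a hypothetical name, using scipy.optimize.linprog) computes this range via the standard split \(\beta = u - v\), \(u, v \geq 0\).

```python
import numpy as np
from scipy.optimize import linprog

def component_range(X, beta_hat, i):
    """Sketch: range of coefficient i over all lasso solutions.

    Assumes beta_hat is one lasso solution for some lambda > 0.
    Every solution beta then satisfies X beta = X beta_hat and
    ||beta||_1 = L, so the solution set is the polyhedron
    {beta : X beta = X beta_hat, ||beta||_1 <= L}; min and max of
    beta_i over it are linear programs in z = (u, v), beta = u - v.
    """
    n, p = X.shape
    fit = X @ beta_hat                # fitted value shared by all solutions
    L = np.abs(beta_hat).sum()        # l1 norm shared by all solutions

    A_eq = np.hstack([X, -X])         # X u - X v = X beta_hat
    b_eq = fit
    A_ub = np.ones((1, 2 * p))        # sum(u) + sum(v) <= L
    b_ub = np.array([L])
    bounds = [(0, None)] * (2 * p)    # u, v >= 0

    vals = []
    for sign in (+1.0, -1.0):         # minimize beta_i, then -beta_i
        c = np.zeros(2 * p)
        c[i], c[p + i] = sign, -sign  # objective: sign * (u_i - v_i)
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                      bounds=bounds, method="highs")
        vals.append(sign * res.fun)
    return min(vals), max(vals)       # interval of possible values of beta_i
```

Repeating this for \(i = 1, \dots, p\) gives the component-wise uncertainty intervals mentioned above; a component is uniquely determined across all lasso solutions exactly when its interval collapses to a point.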

MSC:
62J07 Ridge regression; shrinkage estimators (Lasso)
62J05 Linear regression; mixed models
65C60 Computational problems in statistics (MSC2010)
Software:
PDCO
References:
[1] Bickel, P., Ritov, Y. and Tsybakov, A. (2009), ‘Simultaneous analysis of lasso and Dantzig selector’, Annals of Statistics 37 (4), 1705-1732. · Zbl 1173.62022 · doi:10.1214/08-AOS620
[2] Candès, E. J. and Plan, Y. (2009), ‘Near-ideal model selection by \(\ell_{1}\) minimization’, Annals of Statistics 37 (5), 2145-2177. · Zbl 1173.62053 · doi:10.1214/08-AOS653
[3] Chen, S., Donoho, D. L. and Saunders, M. (1998), ‘Atomic decomposition for basis pursuit’, SIAM Journal on Scientific Computing 20 (1), 33-61. · Zbl 0919.94002 · doi:10.1137/S1064827596304010
[4] Donoho, D. L. (2006), ‘For most large underdetermined systems of linear equations, the minimal \(\ell_{1}\) solution is also the sparsest solution’, Communications on Pure and Applied Mathematics 59 (6), 797-829. · Zbl 1113.15004 · doi:10.1002/cpa.20132
[5] Dossal, C. (2012), ‘A necessary and sufficient condition for exact sparse recovery by \(\ell_{1}\) minimization’, Comptes Rendus Mathematique 350 (1-2), 117-120. · Zbl 1236.94028
[6] Efron, B., Hastie, T., Johnstone, I. and Tibshirani, R. (2004), ‘Least angle regression’, Annals of Statistics 32 (2), 407-499. · Zbl 1091.62054 · doi:10.1214/009053604000000067
[7] Fuchs, J. J. (2005), ‘Recovery of exact sparse representations in the presence of bounded noise’, IEEE Transactions on Information Theory 51 (10), 3601-3608. · Zbl 1286.94031
[8] Fukuda, K., Liebling, T. M. and Margot, F. (1997), ‘Analysis of backtrack algorithms for listing all vertices and all faces of a convex polyhedron’, Computational Geometry: Theory and Applications 8 (1), 1-12. · Zbl 1133.68462 · doi:10.1016/0925-7721(95)00049-6
[9] Koltchinskii, V. (2009a), ‘The Dantzig selector and sparsity oracle inequalities’, Bernoulli 15 (3), 799-828. · Zbl 1452.62486
[10] Koltchinskii, V. (2009b), ‘Sparsity in penalized empirical risk minimization’, Annales de l’Institut Henri Poincaré, Probabilités et Statistiques 45 (1), 7-57. · Zbl 1168.62044 · doi:10.1214/07-AIHP146 · eudml:78023
[11] Mairal, J. and Yu, B. (2012), ‘Complexity analysis of the lasso regularization path’, in Proceedings of the 29th International Conference on Machine Learning (ICML).
[12] Negahban, S., Ravikumar, P., Wainwright, M. J. and Yu, B. (2012), A unified framework for high-dimensional analysis of \(M\)-estimators with decomposable regularizers. To appear in Statistical Science. · Zbl 1331.62350
[13] Osborne, M., Presnell, B. and Turlach, B. (2000a), ‘A new approach to variable selection in least squares problems’, IMA Journal of Numerical Analysis 20 (3), 389-404. · Zbl 0962.65036 · doi:10.1093/imanum/20.3.389
[14] Osborne, M., Presnell, B. and Turlach, B. (2000b), ‘On the lasso and its dual’, Journal of Computational and Graphical Statistics 9 (2), 319-337.
[15] Rockafellar, R. T. (1970), Convex Analysis , Princeton University Press, Princeton. · Zbl 0193.18401
[16] Rosset, S., Zhu, J. and Hastie, T. (2004), ‘Boosting as a regularized path to a maximum margin classifier’, Journal of Machine Learning Research 5 , 941-973. · Zbl 1222.68290 · www.jmlr.org
[17] Tibshirani, R. (1996), ‘Regression shrinkage and selection via the lasso’, Journal of the Royal Statistical Society: Series B 58 (1), 267-288. · Zbl 0850.62538
[18] Tibshirani, R. J. (2011), The Solution Path of the Generalized Lasso, PhD thesis, Department of Statistics, Stanford University. · Zbl 1234.62107
[19] Tibshirani, R. J. and Taylor, J. (2011), Proofs and technical details for “The solution path of the generalized lasso”. · Zbl 1234.62107 · www.stat.cmu.edu
[20] Tibshirani, R. J. and Taylor, J. (2012), ‘Degrees of freedom in lasso problems’, Annals of Statistics 40 (2), 1198-1232. · Zbl 1274.62469
[21] van de Geer, S. and Bühlmann, P. (2009), ‘On the conditions used to prove oracle results for the lasso’, Electronic Journal of Statistics 3, 1360-1392. · Zbl 1327.62425
[22] Wainwright, M. J. (2009), ‘Sharp thresholds for high-dimensional and noisy sparsity recovery using \(\ell_{1}\)-constrained quadratic programming (lasso)’, IEEE Transactions on Information Theory 55 (5), 2183-2202. · Zbl 1367.62220
[23] Zhao, P. and Yu, B. (2006), ‘On model selection consistency of lasso’, Journal of Machine Learning Research 7 , 2541-2564. · Zbl 1222.62008 · www.jmlr.org
[24] Zou, H. and Hastie, T. (2005), ‘Regularization and variable selection via the elastic net’, Journal of the Royal Statistical Society: Series B 67 (2), 301-320. · Zbl 1069.62054 · doi:10.1111/j.1467-9868.2005.00503.x