
D-trace estimation of a precision matrix using adaptive lasso penalties. (English) Zbl 1414.62224

Summary: The accurate estimation of a precision matrix plays a crucial role in the current age of high-dimensional data explosion. To deal with this problem, one of the most prominent and commonly used techniques is \(\ell_1\) norm (Lasso) penalization of a given loss function. This approach guarantees the sparsity of the precision matrix estimate for properly selected penalty parameters. However, the \(\ell_1\) norm penalization often fails to control the bias of the obtained estimator because of its overestimation behavior. In this paper, we introduce two adaptive extensions of the recently proposed \(\ell_1\) norm penalized D-trace loss minimization method. Both aim to reduce the bias in the resulting estimator. Extensive numerical results, using both simulated and real datasets, show the advantage of our proposed estimators.
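The adaptive-weighting idea in the summary can be illustrated with a minimal sketch: a pilot estimate \(\hat\Omega^0\) supplies data-driven weights \(w_{ij}=1/|\hat\Omega^0_{ij}|^\gamma\), and the weighted \(\ell_1\)-penalized D-trace loss \(\tfrac12\operatorname{tr}(\Omega\hat\Sigma\Omega)-\operatorname{tr}(\Omega)\) is minimized by proximal gradient descent. This is an assumed illustrative solver, not the authors' algorithm; the function name, the ridge-regularized pilot estimator, and the step-size choice are hypothetical placeholders.

```python
import numpy as np

def adaptive_dtrace(S, lam=0.1, gamma=1.0, n_iter=500, eps=1e-8):
    """Sketch of adaptive-lasso-penalized D-trace precision estimation.

    Minimizes 0.5*tr(Omega @ S @ Omega) - tr(Omega)
    + lam * sum_{i != j} w_ij * |Omega_ij|
    by proximal gradient descent, with adaptive weights
    w_ij = 1 / |Omega0_ij|**gamma taken from a pilot estimate.
    """
    p = S.shape[0]
    # Pilot estimate: ridge-regularized inverse of S (a hypothetical choice;
    # any consistent initial estimator can supply the adaptive weights).
    Omega0 = np.linalg.inv(S + 0.1 * np.eye(p))
    W = 1.0 / (np.abs(Omega0) ** gamma + eps)
    np.fill_diagonal(W, 0.0)  # leave the diagonal unpenalized
    # The gradient of the D-trace loss is Lipschitz with constant
    # lambda_max(S), so 1/lambda_max(S) is a safe fixed step size.
    step = 1.0 / max(np.linalg.eigvalsh(S)[-1], eps)
    Omega = np.eye(p)
    for _ in range(n_iter):
        grad = 0.5 * (S @ Omega + Omega @ S) - np.eye(p)
        Z = Omega - step * grad
        # Weighted soft-thresholding: the proximal step of the penalty.
        Omega = np.sign(Z) * np.maximum(np.abs(Z) - step * lam * W, 0.0)
    return Omega
```

Because large weights attach to entries whose pilot estimates are near zero, small true entries are thresholded aggressively while large entries receive little shrinkage, which is exactly the bias-reduction mechanism the adaptive extensions exploit.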

MSC:

62H30 Classification and discrimination; cluster analysis (statistical aspects)
62J10 Analysis of variance and covariance (ANOVA)
65S05 Graphical methods in numerical analysis

Software:

glasso

References:

[1] Anderson TW (2003) An introduction to multivariate statistical analysis. Wiley-Interscience, New York · Zbl 1039.62044
[2] Banerjee O, El Ghaoui L, d’Aspremont A (2008) Model selection through sparse maximum likelihood estimation for multivariate Gaussian or binary data. J Mach Learn Res 9:485-516 · Zbl 1225.68149
[3] Banerjee S, Ghosal S (2015) Bayesian structure learning in graphical models. J Multivar Anal 136:147-162 · Zbl 1308.62119 · doi:10.1016/j.jmva.2015.01.015
[4] Bickel PJ, Levina E (2008) Regularized estimation of large covariance matrices. Ann Stat 36(1):199-227 · Zbl 1132.62040 · doi:10.1214/009053607000000758
[5] Cai T, Liu W, Luo X (2011) A constrained \(\ell_1\) minimization approach to sparse precision matrix estimation. J Am Stat Assoc 106(494):594-607 · Zbl 1232.62087 · doi:10.1198/jasa.2011.tm10155
[6] Cai T, Yuan M (2012) Adaptive covariance matrix estimation through block thresholding. Ann Stat 40(4):2014-2042 · Zbl 1257.62060 · doi:10.1214/12-AOS999
[7] Cui Y, Leng C, Sun D (2016) Sparse estimation of high-dimensional correlation matrices. Comput Stat Data Anal 93:390-403 · Zbl 1468.62044 · doi:10.1016/j.csda.2014.10.001
[8] d’Aspremont A, Banerjee O, Ghaoui L (2008) First-order methods for sparse covariance selection. SIAM J Matrix Anal Appl 30:56-66 · Zbl 1156.90423 · doi:10.1137/060670985
[9] Dempster A (1972) Covariance selection. Biometrics 28(1):157-175 · doi:10.2307/2528966
[10] Deng X, Tsui K (2013) Penalized covariance matrix estimation using a matrix-logarithm transformation. J Comput Graph Stat 22(2):494-512 · doi:10.1080/10618600.2012.715556
[11] Duchi J, Gould S, Koller D (2008) Projected subgradient methods for learning sparse Gaussians. In: Proceedings of the 24th conference on uncertainty in artificial intelligence, pp 153-160. arXiv:1206.3249
[12] El Karoui N (2008) Operator norm consistent estimation of large-dimensional sparse covariance matrices. Ann Stat 36(6):2717-2756 · Zbl 1196.62064 · doi:10.1214/07-AOS559
[13] Fan J, Feng J, Wu Y (2009) Network exploration via the adaptive Lasso and SCAD penalties. Ann Appl Stat 3(2):521-541 · Zbl 1166.62040 · doi:10.1214/08-AOAS215
[14] Fan J, Li R (2001) Variable selection via nonconcave penalized likelihood and its oracle properties. J Am Stat Assoc 96:1348-1360 · Zbl 1073.62547 · doi:10.1198/016214501753382273
[15] Frahm G, Memmel C (2010) Dominating estimator for minimum-variance portfolios. J Econom 159:289-302 · Zbl 1441.62264 · doi:10.1016/j.jeconom.2010.07.007
[16] Friedman J, Hastie T, Tibshirani R (2008) Sparse inverse covariance estimation with the graphical Lasso. Biostatistics 9(3):432-441 · Zbl 1143.62076 · doi:10.1093/biostatistics/kxm045
[17] Goto S, Xu Y (2015) Improving mean variance optimization through sparse hedging restrictions. J Finan Quant Anal 50(06):1415-1441 · doi:10.1017/S0022109015000526
[18] Haff LR (1980) Estimation of the inverse covariance matrix: random mixtures of the inverse Wishart matrix and the identity. Ann Stat 8(3):586-597 · Zbl 0441.62045 · doi:10.1214/aos/1176345010
[19] Hsieh C-J, Dhillon IS, Ravikumar PK, Sustik MA (2011) Sparse inverse covariance matrix estimation using quadratic approximation. In: Advances in neural information processing systems, vol 24, pp 2330-2338
[20] Huang S, Li J, Sun L, Ye J, Fleisher A, Wu T, Chen K, Reiman E (2010) Learning brain connectivity of Alzheimer’s disease by sparse inverse covariance estimation. NeuroImage 50:935-949 · doi:10.1016/j.neuroimage.2009.12.120
[21] Johnstone IM (2001) On the distribution of the largest eigenvalue in principal component analysis. Ann Stat 29(3):295-327 · Zbl 1016.62078 · doi:10.1214/aos/1009210544
[22] Jorissen RN, Lipton L, Gibbs P, Chapman M, Desai J, Jones IT, Yeatman TJ, East P, Tomlinson IP, Verspaget HW, Aaltonen LA, Kruhøffer M, Orntoft TF, Andersen CL, Sieber OM (2008) DNA copy-number alterations underlie gene expression differences between microsatellite stable and unstable colorectal cancers. Clin Cancer Res 14(24):8061-8069 · doi:10.1158/1078-0432.CCR-08-1431
[23] Kourtis A, Dotsis G, Markellos N (2012) Parameter uncertainty in portfolio selection: shrinking the inverse covariance matrix. J Bank Finan 36:2522-2531 · doi:10.1016/j.jbankfin.2012.05.005
[24] Kuerer HM, Newman LA, Smith TL, Ames FC, Hunt KK, Dhingra K, Theriault RL, Singh G, Binkley SM, Sneige N, Buchholz TA, Ross MI, McNeese MD, Buzdar AU, Hortobagyi GN, Singletary SE (1999) Clinical course of breast cancer patients with complete pathologic primary tumor and axillary lymph node response to doxorubicin-based neoadjuvant chemotherapy. J Clin Oncol 17(2):460-469 · doi:10.1200/JCO.1999.17.2.460
[25] Lam C, Fan J (2009) Sparsistency and rates of convergence in large covariance matrix estimation. Ann Stat 37(6B):4254-4278 · Zbl 1191.62101 · doi:10.1214/09-AOS720
[26] Lauritzen S (1996) Graphical models. Clarendon Press, Oxford · Zbl 0907.62001
[27] Ledoit O, Wolf M (2004) A well-conditioned estimator for large-dimensional covariance matrices. J Multivar Anal 88:365-411 · Zbl 1032.62050 · doi:10.1016/S0047-259X(03)00096-4
[28] Ledoit O, Wolf M (2012) Nonlinear shrinkage estimation of large-dimensional covariance matrices. Ann Stat 40(2):1024-1060 · Zbl 1274.62371 · doi:10.1214/12-AOS989
[29] Mardia KV, Kent JT, Bibby JM (1979) Multivariate analysis. Academic Press, New York · Zbl 0432.62029
[30] Matthews BW (1975) Comparison of the predicted and observed secondary structure of T4 phage lysozyme. Biochim Biophys Acta 405:442-451 · doi:10.1016/0005-2795(75)90109-9
[31] Maurya A (2014) A joint convex penalty for inverse covariance matrix estimation. Comput Stat Data Anal 75:15-27 · Zbl 1506.62128 · doi:10.1016/j.csda.2014.01.015
[32] McLachlan S (2004) Discriminant analysis and statistical pattern recognition. Wiley, New Jersey · Zbl 1108.62317
[33] Meinshausen N (2007) Relaxed Lasso. Comput Stat Data Anal 52(1):374-393 · Zbl 1452.62522 · doi:10.1016/j.csda.2006.12.019
[34] Meinshausen N, Bühlmann P (2006) High-dimensional graphs and variable selection with the Lasso. Ann Stat 34(2):1436-1462 · Zbl 1113.62082 · doi:10.1214/009053606000000281
[35] Nguyen TD, Welsch RE (2010) Outlier detection and robust covariance estimation using mathematical programming. Adv Data Anal Classif 4(4):301-334 · Zbl 1284.62057 · doi:10.1007/s11634-010-0070-7
[36] Ravikumar P, Wainwright M, Raskutti G, Yu B (2011) High-dimensional covariance estimation by minimizing \(\ell_1\)-penalized log-determinant divergence. Electron J Stat 5:935-980 · Zbl 1274.62190 · doi:10.1214/11-EJS631
[37] Rothman A, Bickel P, Levina E (2009) Generalized thresholding of large covariance matrices. J Am Stat Assoc 104(485):177-186 · Zbl 1388.62170 · doi:10.1198/jasa.2009.0101
[38] Rothman A, Bickel P, Levina E, Zhu J (2008) Sparse permutation invariant covariance estimation. Electron J Stat 2:494-515 · Zbl 1320.62135 · doi:10.1214/08-EJS176
[39] Rothman AJ (2012) Positive definite estimators of large covariance matrices. Biometrika 99(2):733-740 · Zbl 1437.62595 · doi:10.1093/biomet/ass025
[40] Ryali S, Chen T, Supekar K, Menon V (2012) Estimation of functional connectivity in fMRI data using stability selection-based sparse partial correlation with elastic net penalty. NeuroImage 59(4):3852-3861 · doi:10.1016/j.neuroimage.2011.11.054
[41] Schafer J, Strimmer K (2005) A shrinkage approach to large-scale covariance matrix estimation and implications for functional genomics. Stat Appl Genet Mol Biol 4(1):Article 32
[42] Scheinberg K, Ma S, Goldfarb D (2010) Sparse inverse covariance selection via alternating linearization methods. In: Advances in neural information processing systems, vol 23, pp 2101-2109
[43] Shi L, Reid LH, Jones WD, Shippy R, Warrington JA, Baker SC, Collins PJ, deLongueville F, Kawasaki ES, Lee KY, Luo Y, Sun YA, Willey JC, Setterquist RA, Fischer GM, Tong W, Dragan YP, Dix DJ, Frueh FW, Goodsaid FM, Herman D, Jensen RV, Johnson CD, Lobenhofer EK, Puri RK, Scherf U, Thierry-Mieg J, Wang C, Wilson M, Wolber PK (2010) The microarray quality control (MAQC)-II study of common practices for the development and validation of microarray-based predictive models. Nat Biotechnol 28(8):827-838 · doi:10.1038/nbt.1665
[44] Stifanelli PF, Creanza TM, Anglani R, Liuzzi VC, Mukherjee S, Schena FP, Ancona N (2013) A comparative study of covariance selection models for the inference of gene regulatory networks. J Biomed Inf 46:894-904 · doi:10.1016/j.jbi.2013.07.002
[45] Tibshirani R (1996) Regression shrinkage and selection via the Lasso. J R Stat Soc 58(1):267-288 · Zbl 0850.62538
[46] Touloumis A (2015) Nonparametric Stein-type shrinkage covariance matrix estimators in high-dimensional settings. Comput Stat Data Anal 83:251-261 · Zbl 1507.62168 · doi:10.1016/j.csda.2014.10.018
[47] van de Geer S, Bühlmann P, Zhou S (2010) The adaptive and the thresholded Lasso for potentially misspecified models. arXiv preprint arXiv:1001.5176 · Zbl 1274.62471
[48] Wang Y, Daniels MJ (2014) Computationally efficient banding of large covariance matrices for ordered data and connections to banding the inverse Cholesky factor. J Multivar Anal 130:21-26 · Zbl 1292.62082 · doi:10.1016/j.jmva.2014.04.026
[49] Warton DI (2008) Penalized normal likelihood and ridge regularization of correlation and covariance matrices. J Am Stat Assoc 103(481):340-349 · Zbl 1471.62362 · doi:10.1198/016214508000000021
[50] Whittaker J (1990) Graphical models in applied multivariate statistics. Wiley, Chichester · Zbl 0732.62056
[51] Witten DM, Friedman JH, Simon N (2011) New insights and faster computations for the graphical Lasso. J Comput Graph Stat 20(4):892-900 · doi:10.1198/jcgs.2011.11051a
[52] Xue L, Ma S, Zou H (2012) Positive-definite \(\ell_1\)-penalized estimation of large covariance matrices. J Am Stat Assoc 107(500):1480-1491 · Zbl 1258.62063 · doi:10.1080/01621459.2012.725386
[53] Yin J, Li J (2013) Adjusting for high-dimensional covariates in sparse precision matrix estimation by \(\ell_1\)-penalization. J Multivar Anal 116:365-381 · Zbl 1277.62146 · doi:10.1016/j.jmva.2013.01.005
[54] Yuan M (2010) High dimensional inverse covariance matrix estimation via linear programming. J Mach Learn Res 11:2261-2286 · Zbl 1242.62043
[55] Yuan M, Lin Y (2007) Model selection and estimation in the Gaussian graphical model. Biometrika 94(1):19-35 · Zbl 1142.62408 · doi:10.1093/biomet/asm018
[56] Zerenner T, Friederichs P, Lehnertz K, Hense A (2014) A Gaussian graphical model approach to climate networks. Chaos Interdiscip J Nonlinear Sci 24(2):023103 · Zbl 1345.86008
[57] Zhang C-H, Huang J (2008) The sparsity and bias of the Lasso selection in high-dimensional linear regression. Ann Stat 36(4):1567-1594 · Zbl 1142.62044
[58] Zhang T, Zou H (2014) Sparse precision matrix estimation via Lasso penalized D-trace loss. Biometrika 101(1):103-120 · Zbl 1285.62063
[59] Zou H (2006) The adaptive Lasso and its oracle properties. J Am Stat Assoc 101(476):1418-1429 · Zbl 1171.62326 · doi:10.1198/016214506000000735
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. In some cases these data have been complemented or enhanced by data from zbMATH Open. This attempts to reflect the references listed in the original paper as accurately as possible without claiming completeness or perfect matching.