## High-dimensional covariance estimation by minimizing $$\ell _{1}$$-penalized log-determinant divergence. (English) Zbl 1274.62190

Summary: Given i.i.d. observations of a random vector $$X\in \mathbb R^{p}$$, we study the problem of estimating both its covariance matrix $$\Sigma ^{*}$$ and its inverse covariance or concentration matrix $$\Theta ^{*}=(\Sigma ^{*})^{ - 1}$$. When $$X$$ is multivariate Gaussian, the non-zero structure of $$\Theta ^{*}$$ is specified by the graph of an associated Gaussian Markov random field, and a popular estimator for such sparse $$\Theta ^{*}$$ is the $$\ell _{1}$$-regularized Gaussian MLE. This estimator is sensible even for non-Gaussian $$X$$, since it corresponds to minimizing an $$\ell _{1}$$-penalized log-determinant Bregman divergence. We analyze its performance under high-dimensional scaling, in which the number of nodes in the graph $$p$$, the number of edges $$s$$, and the maximum node degree $$d$$ are allowed to grow as a function of the sample size $$n$$. In addition to the parameters $$(p,s,d)$$, our analysis identifies other key quantities that control rates: (a) the $$\ell _{\infty }$$-operator norm of the true covariance matrix $$\Sigma ^{*}$$; (b) the $$\ell _{\infty }$$-operator norm of the sub-matrix $$\Gamma ^{*}_{SS}$$, where $$S$$ indexes the graph edges and $$\Gamma ^{*}=(\Theta ^{*})^{ - 1}\otimes (\Theta ^{*})^{ - 1}$$; (c) a mutual incoherence or irrepresentability measure on the matrix $$\Gamma ^{*}$$; and (d) the rate of decay $$1/f(n,\delta )$$ of the probabilities $$\{|\hat \Sigma^{n}_{ij}-\Sigma^{*}_{ij}|>\delta\}$$, where $$\hat \Sigma^{n}$$ is the sample covariance based on $$n$$ samples. Our first result establishes consistency of our estimate $$\hat \Theta$$ in the elementwise maximum-norm. This in turn allows us to derive convergence rates in the Frobenius and spectral norms, with improvements upon existing results for graphs with maximum node degrees $$d=o(\sqrt{s})$$.
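The estimator analyzed here minimizes the $$\ell_1$$-penalized log-determinant objective $$\mathrm{tr}(\hat\Sigma^{n}\Theta) - \log\det\Theta + \lambda\|\Theta\|_{1,\mathrm{off}}$$. A minimal sketch of fitting it on synthetic chain-graph data, using scikit-learn's `GraphicalLasso` solver as a stand-in for any solver of this objective (the choice of solver, the chain-graph $$\Theta^{*}$$, and the penalty level `alpha` are illustrative assumptions, not the paper's setup):

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

# Illustrative sparse ground truth: a tridiagonal concentration matrix,
# i.e. the Gaussian Markov random field is a chain graph on p nodes.
p = 5
theta_true = np.eye(p) + 0.4 * (np.eye(p, k=1) + np.eye(p, k=-1))
sigma_true = np.linalg.inv(theta_true)

# Draw n i.i.d. samples of X ~ N(0, Sigma*).
rng = np.random.default_rng(0)
X = rng.multivariate_normal(np.zeros(p), sigma_true, size=2000)

# Minimize tr(Sigma_hat Theta) - log det Theta + alpha * ||Theta||_{1,off}.
model = GraphicalLasso(alpha=0.05).fit(X)
theta_hat = model.precision_

# Elementwise maximum-norm error: the quantity controlled by the first result.
err = np.max(np.abs(theta_hat - theta_true))
```

With $$n$$ large relative to $$(p, s, d)$$, the elementwise error `err` should be small, in line with the max-norm consistency result.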
In our second result, we show that with probability converging to one, the estimate $$\hat \Theta$$ correctly specifies the zero pattern of the concentration matrix $$\Theta^{*}$$. We illustrate our theoretical results via simulations for various graphs and problem parameters, showing good correspondence between the theoretical predictions and the behavior observed in simulations.
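The zero-pattern (model-selection) result can be probed empirically by comparing the support of the estimate with that of the truth. A hedged sketch under the same illustrative chain-graph assumptions as above (the tolerance `1e-3` for declaring an entry zero is an assumption of this sketch):

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

# Same illustrative chain-graph truth as before.
p = 5
theta_true = np.eye(p) + 0.4 * (np.eye(p, k=1) + np.eye(p, k=-1))
X = np.random.default_rng(1).multivariate_normal(
    np.zeros(p), np.linalg.inv(theta_true), size=5000)

theta_hat = GraphicalLasso(alpha=0.1).fit(X).precision_

# Supports: the non-zero off-diagonal pattern encodes the graph edges.
support_true = np.abs(theta_true) > 1e-8
support_hat = np.abs(theta_hat) > 1e-3  # tolerance for numerical zeros
recovered = np.array_equal(support_true, support_hat)
```

When the sample size is large enough relative to the incoherence and degree conditions, `recovered` should be `True`; for small $$n$$ or weak edges, false exclusions appear, consistent with the theory.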

### MSC:

62F12 Asymptotic properties of parametric estimators
62F30 Parametric inference under constraints

### Software:

glasso
