Computationally efficient banding of large covariance matrices for ordered data and connections to banding the inverse Cholesky factor. (English) Zbl 1292.62082

Summary: We propose a computationally efficient approach to estimate (large) \(p\)-dimensional covariance matrices of ordered (or longitudinal) data based on an independent sample of size \(n\). To do this, we construct the estimator based on a \(k\)-band partial autocorrelation matrix with the number of bands chosen using an exact multiple hypothesis testing procedure. This approach is considerably faster than many existing methods and only requires inversion of \((k + 1)\)-dimensional covariance matrices. The resulting estimator is positive definite as long as \(k < n\) (where \(p\) can be larger than \(n\)). We make connections between this approach and banding the Cholesky factor of the modified Cholesky decomposition of the inverse covariance matrix (Wu and Pourahmadi, 2003) and show that the maximum likelihood estimator of the \(k\)-band partial autocorrelation matrix is the same as the \(k\)-band inverse Cholesky factor. We evaluate our estimator via extensive simulations and illustrate the approach using high-dimensional sonar data.
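The banding connection described above can be sketched in code. The following is a minimal illustration, not the authors' implementation: it builds a k-banded estimate via the modified Cholesky decomposition of the inverse covariance (the Wu and Pourahmadi, 2003 construction the summary refers to), regressing each variable on at most its k immediate predecessors. The function name and the use of least squares for the autoregressions are assumptions for illustration; band selection by multiple testing is omitted.

```python
import numpy as np

def kband_cholesky_cov(X, k):
    """Sketch (hypothetical helper): k-banded covariance estimate via the
    modified Cholesky decomposition.  Each variable is regressed on at most
    its k immediate predecessors, giving a unit lower-triangular T whose
    sub-diagonal part is k-banded and a diagonal D of innovation variances,
    so that inv(Sigma_hat) = T' inv(D) T.  The estimate is positive
    definite as long as k < n, even when p > n."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)               # center each column
    T = np.eye(p)
    d = np.empty(p)
    d[0] = Xc[:, 0].var(ddof=1)           # first innovation variance
    for j in range(1, p):
        lo = max(0, j - k)                # only the k nearest predecessors
        Z = Xc[:, lo:j]
        phi, *_ = np.linalg.lstsq(Z, Xc[:, j], rcond=None)
        T[j, lo:j] = -phi                 # negated regression coefficients
        resid = Xc[:, j] - Z @ phi
        d[j] = resid.var(ddof=1)          # innovation variance
    prec = T.T @ np.diag(1.0 / d) @ T     # k-banded inverse Cholesky factor
    return np.linalg.inv(prec)            # the covariance estimate itself
```

Because each regression involves at most k predecessors, only (k+1)-dimensional systems are solved, which mirrors the computational saving the summary emphasizes.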


62H12 Estimation in multivariate analysis
62H15 Hypothesis testing in multivariate analysis
62H20 Measures of association (correlation, canonical correlation, etc.)




[1] Bickel, Peter J.; Levina, Elizaveta, Regularized estimation of large covariance matrices, Ann. Statist., 36, 1, 199-227, (2008) · Zbl 1132.62040
[2] Daniels, Michael J.; Pourahmadi, Mohsen, Modeling covariance matrices via partial autocorrelations, J. Multivariate Anal., 100, 2352-2363, (2009) · Zbl 1175.62090
[3] Fan, Jianqing; Fan, Yingying; Lv, Jinchi, High dimensional covariance matrix estimation using a factor model, J. Econometrics, 147, 1, 186-197, (2008) · Zbl 1429.62185
[4] Friedman, Jerome H., Regularized discriminant analysis, J. Amer. Statist. Assoc., 84, 405, 165-175, (1989)
[5] Furrer, Reinhard; Bengtsson, Thomas, Estimation of high-dimensional prior and posterior covariance matrices in Kalman filter variants, J. Multivariate Anal., 98, 227-255, (2007) · Zbl 1105.62091
[6] Gabriel, K. R., Ante-dependence analysis of an ordered set of variables, Ann. Math. Stat., 33, 1, 201-212, (1962) · Zbl 0111.15604
[7] Joe, Harry, Generating random correlation matrices based on partial correlations, J. Multivariate Anal., 97, 2177-2189, (2006) · Zbl 1112.62055
[8] Johnstone, Iain M.; Lu, Arthur Yu, Sparse principal components analysis, (2009), arXiv preprint arXiv:0901.4392 · Zbl 1388.62174
[9] Pourahmadi, Mohsen, Joint mean-covariance models with applications to longitudinal data: unconstrained parameterisation, Biometrika, 86, 3, 677-690, (1999) · Zbl 0949.62066
[10] Pourahmadi, Mohsen, Covariance estimation: the GLM and regularization perspectives, Statist. Sci., 26, 3, 369-387, (2011) · Zbl 1246.62139
[11] Rothman, Adam J.; Levina, Elizaveta; Zhu, Ji, A new approach to Cholesky-based covariance regularization in high dimensions, Biometrika, 97, 3, 539-550, (2010) · Zbl 1195.62089
[12] Warton, David I., Penalized normal likelihood and ridge regularization of correlation and covariance matrices, J. Amer. Statist. Assoc., 103, 481, 340-349, (2008) · Zbl 1471.62362
[13] Witten, Daniela M.; Tibshirani, Robert, Covariance-regularized regression and classification for high dimensional problems, J. R. Stat. Soc. Ser. B, 71, 3, 615-636, (2009) · Zbl 1250.62033
[14] Wu, Wei B.; Pourahmadi, Mohsen, Nonparametric estimation of large covariance matrices of longitudinal data, Biometrika, 90, 4, 831-844, (2003) · Zbl 1436.62347
[15] Zimmerman, Dale L.; Núñez-Antón, Vicente, Antedependence models for longitudinal data, vol. 11, (2010), CRC Press Boca Raton, Florida · Zbl 0897.62077