
Adaptive robust estimation in sparse vector model. (English) Zbl 1476.62063

This paper studies the construction of estimators that are optimally adaptive in a minimax sense with respect to the noise level, to the form of the noise distribution, and to the sparsity. In the sparse vector model, the authors consider estimation of the target vector, of its l2-norm and of the noise variance, under the assumption that the noise random variables are i.i.d. They consider classes of noise distributions with polynomially and exponentially decreasing tails, as well as the case of Gaussian noise. The rates they obtain turn out to differ from the minimax nonadaptive rates that hold when this triplet is known. In particular, they study estimation of a sparse vector in the l2-norm when both the noise variance and the noise distribution are unknown, assuming only that the noise distribution belongs to a given class, which is either a class of distributions with polynomial tails or a class with exponentially decaying tails. The paper further treats estimation of the l2-norm of the vector and of the noise variance, and gives proofs of the corresponding upper and lower bounds.
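The following Python sketch illustrates the sparse vector model discussed above: observations y_i = theta_i + sigma*xi_i with an s-sparse mean vector theta, unknown noise level sigma, and i.i.d. noise xi_i with heavy (polynomially decaying) tails. The estimators shown (a median-based scale estimate and soft thresholding) are generic textbook devices used only for illustration; they are not the authors' procedures, and all names, constants and tuning choices below are assumptions made for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Sparse vector model: y = theta + sigma * xi with an s-sparse theta,
# unknown noise level sigma, and i.i.d. noise xi.  Student-t noise is
# used here as an example of a polynomially decaying tail.
d, s, sigma = 10_000, 30, 1.5
theta = np.zeros(d)
theta[:s] = 5.0
y = theta + sigma * rng.standard_t(df=4, size=d)

# Crude robust scale estimate: since theta is sparse, most coordinates of y
# are pure noise, so the median absolute value of y estimates sigma up to a
# constant depending on the (unknown) noise law.  The constant 0.6745 is the
# Gaussian calibration and is only an illustrative choice here.
sigma_hat = np.median(np.abs(y)) / 0.6745

# Soft thresholding at a universal-type level based on the estimated sigma.
tau = sigma_hat * np.sqrt(2.0 * np.log(d / s))
theta_hat = np.sign(y) * np.maximum(np.abs(y) - tau, 0.0)

print("estimated noise level:", sigma_hat)
print("l2 error of theta_hat:", np.linalg.norm(theta_hat - theta))
print("estimated vs true l2-norm:", np.linalg.norm(theta_hat), np.linalg.norm(theta))
```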

MSC:

62G05 Nonparametric estimation
62G35 Nonparametric robustness
62R10 Functional data analysis
62C20 Minimax procedures in statistical decision theory
62J05 Linear regression; mixed models

Software:

EigenPrism
