# zbMATH — the first resource for mathematics

On optimality of Bayesian testimation in the normal means problem. (English) Zbl 1126.62003
Summary: We consider the problem of recovering a high-dimensional vector $$\mu$$ observed in white noise, where the unknown vector $$\mu$$ is assumed to be sparse. The objective of the paper is to develop a Bayesian formalism which gives rise to a family of $$l_{0}$$-type penalties. The penalties are associated with various choices of the prior distributions $$\pi_n(\cdot )$$ on the number of nonzero entries of $$\mu$$ and, hence, are easy to interpret. The resulting Bayesian estimators lead to a general thresholding rule which accommodates many of the known thresholding and model selection procedures as particular cases corresponding to specific choices of $$\pi_n(\cdot )$$. Furthermore, they achieve optimality in a rather general setting under very mild conditions on the prior. We also specify the class of priors $$\pi_n(\cdot )$$ for which the resulting estimator is adaptively optimal (in the minimax sense) for a wide range of sparse sequences and consider several examples of such priors.
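The penalized formulation the summary describes can be illustrated with a stylized sketch: in the normal means model $$x_i = \mu_i + \sigma z_i$$, an $$l_0$$-type procedure keeps the $$k$$ largest observations in absolute value, where $$k$$ minimizes a residual sum of squares plus a penalty on the model size. The function below is a hypothetical illustration, not the paper's exact estimator; `penalty` stands in for the prior-driven penalties derived from $$\pi_n(\cdot)$$ (e.g., an AIC-style choice $$\mathrm{pen}(k) = 2\sigma^2 k$$).

```python
import numpy as np

def l0_threshold_estimate(x, sigma, penalty):
    """Stylized l0-penalized estimation in the normal means model.

    For each candidate model size k, keep the k largest |x_i| and score
    the fit by the residual sum of squares plus penalty(k); the minimizing
    k yields a hard-thresholding rule.  `penalty` is a hypothetical
    callable pen(k) standing in for the prior-driven penalties discussed
    in the paper (this sketch does not reproduce their exact form).
    """
    n = len(x)
    order = np.argsort(-np.abs(x))            # indices by decreasing |x_i|
    sq = np.abs(x[order]) ** 2
    # Residual sum of squares after keeping the k largest coefficients:
    # the sum of the squared magnitudes that were set to zero.
    rss = np.concatenate(([sq.sum()], sq.sum() - np.cumsum(sq)))
    scores = rss + np.array([penalty(k) for k in range(n + 1)])
    k_hat = int(np.argmin(scores))
    mu_hat = np.zeros(n)
    keep = order[:k_hat]
    mu_hat[keep] = x[keep]                    # hard thresholding: keep or kill
    return mu_hat
```

With the AIC-style penalty, large entries survive and small ones are set to zero; richer priors $$\pi_n(\cdot)$$ correspond to penalties that grow non-linearly in $$k$$ and reproduce other model selection criteria as special cases.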

##### MSC:
- 62C10 Bayesian problems; characterization of Bayes procedures
- 62F03 Parametric hypothesis testing
- 62C20 Minimax procedures in statistical decision theory
- 62G05 Nonparametric estimation
- 62F10 Point estimation
##### Software:
EBayesThresh
##### References:
[1] Abramovich, F. and Angelini, C. (2006). Bayesian maximum a posteriori multiple testing procedure. Sankhyā 68 436–460. · Zbl 1193.62031
[2] Abramovich, F. and Benjamini, Y. (1995). Thresholding of wavelet coefficients as a multiple hypotheses testing procedure. In Wavelets and Statistics. Lecture Notes in Statist. 103 5–14. Springer, New York. · Zbl 0875.62081
[3] Abramovich, F. and Benjamini, Y. (1996). Adaptive thresholding of wavelet coefficients. Comput. Statist. Data Anal. 22 351–361.
[4] Abramovich, F., Benjamini, Y., Donoho, D. L. and Johnstone, I. M. (2006). Adapting to unknown sparsity by controlling the false discovery rate. Ann. Statist. 34 584–653. · Zbl 1092.62005
[5] Akaike, H. (1973). Information theory and an extension of the maximum likelihood principle. In Second International Symposium on Information Theory (B. N. Petrov and F. Csáki, eds.) 267–281. Akadémiai Kiadó, Budapest. · Zbl 0283.62006
[6] Antoniadis, A. and Fan, J. (2001). Regularization of wavelet approximations (with discussion). J. Amer. Statist. Assoc. 96 939–967. · Zbl 1072.62561
[7] Benjamini, Y. and Hochberg, Y. (1995). Controlling the false discovery rate: A practical and powerful approach to multiple testing. J. Roy. Statist. Soc. Ser. B 57 289–300. · Zbl 0809.62014
[8] Birgé, L. and Massart, P. (2001). Gaussian model selection. J. Eur. Math. Soc. 3 203–268. · Zbl 1037.62001
[9] Donoho, D. L. and Johnstone, I. M. (1994). Ideal spatial adaptation via wavelet shrinkage. Biometrika 81 425–455. · Zbl 0815.62019
[10] Donoho, D. L. and Johnstone, I. M. (1994). Minimax risk over $$\ell_p$$-balls for $$\ell_q$$-error. Probab. Theory Related Fields 99 277–303. · Zbl 0802.62006
[11] Donoho, D. L. and Johnstone, I. M. (1996). Neo-classical minimax problems, thresholding and adaptive function estimation. Bernoulli 2 39–62. · Zbl 0877.62035
[12] Donoho, D. L., Johnstone, I. M., Hoch, J. C. and Stern, A. S. (1992). Maximum entropy and the nearly black object (with discussion). J. Roy. Statist. Soc. Ser. B 54 41–81. · Zbl 0788.62103
[13] Fan, J. and Li, R. (2001). Variable selection via nonconcave penalized likelihood and its oracle properties. J. Amer. Statist. Assoc. 96 1348–1360. · Zbl 1073.62547
[14] Foster, D. and George, E. (1994). The risk inflation criterion for multiple regression. Ann. Statist. 22 1947–1975. · Zbl 0829.62066
[15] Foster, D. and Stine, R. (1999). Local asymptotic coding and the minimum description length. IEEE Trans. Inform. Theory 45 1289–1293. · Zbl 0959.62006
[16] Frank, I. E. and Friedman, J. H. (1993). A statistical view of some chemometrics regression tools (with discussion). Technometrics 35 109–148. · Zbl 0775.62288
[17] Hochberg, Y. (1988). A sharper Bonferroni procedure for multiple tests of significance. Biometrika 75 800–802. · Zbl 0661.62067
[18] Holm, S. (1979). A simple sequentially rejective multiple test procedure. Scand. J. Statist. 6 65–70. · Zbl 0402.62058
[19] Hunter, D. R. and Li, R. (2005). Variable selection using MM algorithms. Ann. Statist. 33 1617–1642. · Zbl 1078.62028
[20] Johnstone, I. M. (1994). Minimax Bayes, asymptotic minimax and sparse wavelet priors. In Statistical Decision Theory and Related Topics V (S. Gupta and J. Berger, eds.) 303–326. Springer, New York. · Zbl 0815.62017
[21] Johnstone, I. M. (2002). Function estimation and Gaussian sequence models. Unpublished manuscript. · Zbl 1037.91527
[22] Johnstone, I. M. and Silverman, B. W. (2004). Needles and straw in haystacks: Empirical Bayes estimates of possibly sparse sequences. Ann. Statist. 32 1594–1649. · Zbl 1047.62008
[23] Sarkar, S. K. (2002). Some results on false discovery rate in stepwise multiple testing procedures. Ann. Statist. 30 239–257. · Zbl 1101.62349
[24] Schwarz, G. (1978). Estimating the dimension of a model. Ann. Statist. 6 461–464. · Zbl 0379.62005
[25] Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. J. Roy. Statist. Soc. Ser. B 58 267–288. · Zbl 0850.62538
[26] Tibshirani, R. and Knight, K. (1999). The covariance inflation criterion for adaptive model selection. J. R. Stat. Soc. Ser. B Stat. Methodol. 61 529–546. · Zbl 0924.62031
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. It attempts to reflect the references listed in the original paper as accurately as possible without claiming the completeness or perfect precision of the matching.