Reconstruction of sparse vectors in white Gaussian noise.

*(English. Russian original)* Zbl 1024.62003
Probl. Inf. Transm. 38, No. 1, 65-79 (2002); translation from Probl. Peredachi Inf. 38, No. 1, 75-91 (2002).

From the paper: Assume that a stochastic vector \(x\in\mathbb{R}^n\) with components
\[
x_i=\theta_i +\xi_i, \quad i=1, \dots,n,
\]
is observed, where the \(\xi_i\) are independent standard Gaussian \(\mathcal{N}(0,1)\) random variables. The problem is to reconstruct the vector \(\theta= (\theta_1,\dots, \theta_n)^T\) from the observations \({\mathbf x}= (x_1,\dots, x_n)^T\). An important component of any statistical problem is the a priori information about the vector to be estimated. We consider the problem of estimating a sparse vector. This means, first, that the dimension \(n\) of the problem is sufficiently large and, second, that the number of large components of the estimated vector is small relative to the dimension of the problem, while the remaining components are sufficiently small. It is essential that the positions of the large components in the estimated vector can be completely arbitrary and are, of course, unknown.
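The observation model can be illustrated with a minimal simulation; the dimension \(n\), the sparsity level, and the magnitude of the large components below are illustrative choices, not values from the paper:

```python
import numpy as np

# Simulate the model x_i = theta_i + xi_i with a sparse theta:
# a few large components at arbitrary (here, random) unknown positions,
# the remaining components exactly zero.
rng = np.random.default_rng(0)
n, k = 1000, 10                                   # dimension, number of large components
theta = np.zeros(n)
support = rng.choice(n, size=k, replace=False)    # arbitrary, unknown positions
theta[support] = 5.0                              # "large" components
xi = rng.standard_normal(n)                       # independent N(0, 1) noise
x = theta + xi                                    # observed vector
```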

Two approaches to statistical estimation in this setting are discussed, namely, the model selection method and threshold estimators. We propose a method for selecting a threshold estimator based on the principle of empirical complexity minimization with minimal conservative penalization.
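A threshold estimator of the simplest (hard) kind can be sketched as follows. The universal threshold \(\sqrt{2\ln n}\) used here is a standard choice from the thresholding literature and is only an assumption for illustration; the paper's own data-driven threshold selection via penalized empirical complexity minimization is not reproduced:

```python
import numpy as np

def hard_threshold(x, t):
    """Hard-threshold estimator: keep components whose magnitude
    exceeds t, set all others to zero."""
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) > t, x, 0.0)

# Illustrative use on simulated data (values are assumptions, not the paper's).
rng = np.random.default_rng(1)
n = 1000
theta = np.zeros(n)
theta[:5] = 6.0                        # a few large components
x = theta + rng.standard_normal(n)     # observations with N(0, 1) noise
t = np.sqrt(2 * np.log(n))             # universal threshold (about 3.72 for n = 1000)
theta_hat = hard_threshold(x, t)       # sparse estimate of theta
```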