Nonconcave penalized M-estimation with a diverging number of parameters.

Zbl 1206.62036

**Summary:** M-estimation is a widely used technique for robust statistical inference. We investigate the asymptotic properties of a nonconcave penalized M-estimator in sparse, high-dimensional, linear regression models. Compared with classic M-estimation, the nonconcave penalized M-estimation method performs parameter estimation and variable selection simultaneously. The proposed method is resistant to heavy-tailed errors or outliers in the response. We show that, under appropriate conditions, the nonconcave penalized M-estimator has the so-called oracle property: it selects variables consistently, and the estimators of the nonzero coefficients have the same asymptotic distribution as they would if the zero coefficients were known in advance. We obtain consistency and asymptotic normality of the estimators when the dimension \(p_n\) of the predictors satisfies \(p_n\log n/n\rightarrow 0\) and \(p_{n}^{2}/n\rightarrow 0\), respectively, where \(n\) is the sample size. Based on the ideas of sure independence screening (SIS) and rank correlation, a robust rank SIS (RSIS) is introduced to handle ultra-high-dimensional data. Simulation studies assess the finite-sample performance of the proposed method, and a real dataset is analyzed for illustration.
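The RSIS screening step described above can be sketched as follows: each predictor is scored by the absolute Spearman rank correlation with the response, and the top-ranked columns are retained before penalized M-estimation is applied. This is a minimal illustrative sketch, not the paper's implementation; `rank_sis` and its arguments are hypothetical names.

```python
import numpy as np

def rank_sis(X, y, d):
    """Robust rank SIS (sketch): keep the d predictors whose
    Spearman rank correlation with y is largest in absolute value.

    Rank correlation makes the screening step insensitive to
    heavy-tailed errors or outliers in the response.
    """
    n, p = X.shape
    ry = np.argsort(np.argsort(y))  # ordinal ranks of the response
    scores = np.empty(p)
    for j in range(p):
        rx = np.argsort(np.argsort(X[:, j]))  # ranks of predictor j
        # Spearman correlation = Pearson correlation of the ranks
        scores[j] = np.corrcoef(rx, ry)[0, 1]
    return np.argsort(-np.abs(scores))[:d]

# Illustrative use: a strong signal in column 0 with heavy-tailed noise
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
y = 3.0 * X[:, 0] + rng.standard_t(df=2, size=200)
selected = rank_sis(X, y, 5)
```

In an ultra-high-dimensional setting, the penalized M-estimator would then be fit only on the screened columns `X[:, selected]`.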

##### MSC:

- 62F12: Asymptotic properties of parametric estimators
- 62F35: Robustness and adaptive procedures (parametric inference)
- 62J05: Linear regression; mixed models
- 65C60: Computational problems in statistics (MSC2010)