Recent zbMATH articles in MSC 62C20
https://www.zbmath.org/atom/cc/62C20
2021-07-10T17:08:46.445117Z

Isotonic regression in multi-dimensional spaces and graphs
https://www.zbmath.org/1462.62208
Deng, Hang; Zhang, Cun-Hui

This work considers the minimax and adaptation rates in the multivariate isotonic regression setting. A minimax theorem serves as the theoretical foundation for developing estimators that lie between the max-min and min-max estimators over possibly smaller classes of upper and lower sets, including a subclass of block estimators. Under appropriate moment conditions on the noise, corresponding risk bounds are derived for such general estimators for isotonic regression on graphs. The asymptotic properties of these estimators are studied in detail; in particular, the block estimator is shown to possess the so-called oracle property in variable selection.

Partially adaptive nonparametric instrumental regression
https://www.zbmath.org/1462.62240
Johannes, Jan; Schwarz, Maik

Summary: We consider the problem of estimating the structural function in nonparametric instrumental regression, where in the presence of an instrument \(W\), a response \(Y\) is modeled in dependence of an endogenous explanatory variable \(Z\). The proposed estimator is based on dimension reduction and additional thresholding. The minimax optimal rate of convergence of the estimator is derived assuming that the structural function belongs to certain ellipsoids which are, in a certain sense, linked to the conditional expectation of \(Z\) given \(W\). We illustrate these results by considering classical smoothness assumptions.
However, the proposed estimator requires an optimal choice of a dimension parameter that depends on certain characteristics of the unknown structural function and of the conditional expectation of \(Z\) given \(W\), which are not known in practice. The main issue addressed in our work is an adaptive choice of this dimension parameter using a model selection approach, under the restriction that the conditional expectation of \(Z\) given \(W\) is smooth in a certain sense. In this situation, we develop a penalized minimum contrast estimator with a randomized penalty and collection of models. We show that this data-driven estimator can attain the lower risk bound, up to a constant, over a wide range of smoothness classes for the structural function.

High-probability minimax probability machines
https://www.zbmath.org/1462.62380
Cousins, Simon; Shawe-Taylor, John

Summary: In this paper we focus on constructing binary classifiers that are built on the premise of minimising an upper bound on their future misclassification rate. We pay particular attention to the approach taken by the minimax probability machine [\textit{G. R. G. Lanckriet} et al., J. Mach. Learn. Res. 3, No. 3, 555--582 (2003; Zbl 1084.68657)], which directly minimises an upper bound on the future misclassification rate in a worst-case setting: that is, under all possible choices of class-conditional distributions with a given mean and covariance matrix. The validity of these bounds rests on the assumption that the means and covariance matrices are known in advance; however, this is not always the case in practice, and their empirical counterparts have to be used instead. This can result in erroneous upper bounds on the future misclassification rate and lead to the formulation of sub-optimal predictors.
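Under the known-moment assumption described above, the worst-case formulation of Lanckriet et al. reduces to a convex problem: minimise \(\|\Sigma_+^{1/2}a\| + \|\Sigma_-^{1/2}a\|\) subject to \(a^\top(\mu_+ - \mu_-) = 1\). The following is a minimal Python sketch of that classical (non-robust) machine; the function name `mpm_train` and the use of a generic `scipy.optimize` solver are illustrative choices, not taken from the paper under review.

```python
import numpy as np
from scipy.optimize import minimize

def mpm_train(mu_pos, cov_pos, mu_neg, cov_neg):
    """Classical minimax probability machine (after Lanckriet et al., 2003).

    Finds a hyperplane sign(a^T z - b) minimising the worst-case
    misclassification rate over all class-conditional distributions
    with the given (assumed known) means and covariances.
    """
    d = len(mu_pos)
    S_pos = np.linalg.cholesky(cov_pos)
    S_neg = np.linalg.cholesky(cov_neg)

    # Minimise ||S_pos^T a|| + ||S_neg^T a||  s.t.  a^T (mu_pos - mu_neg) = 1.
    obj = lambda a: np.linalg.norm(S_pos.T @ a) + np.linalg.norm(S_neg.T @ a)
    cons = [{"type": "eq", "fun": lambda a: a @ (mu_pos - mu_neg) - 1.0}]
    res = minimize(obj, x0=np.ones(d), constraints=cons)
    a = res.x

    kappa = 1.0 / res.fun                    # optimal margin parameter kappa*
    alpha = 1.0 / (1.0 + kappa ** 2)         # worst-case misclassification rate
    b = a @ mu_pos - kappa * np.sqrt(a @ cov_pos @ a)   # offset of the hyperplane
    return a, b, alpha

# Illustrative example: two symmetric classes with identity covariance.
a, b, alpha = mpm_train(np.array([1.0, 1.0]), np.eye(2),
                        np.array([-1.0, -1.0]), np.eye(2))
```

The paper's contribution replaces the exact moments above with empirical ones plus high-probability deviation bounds, which turns each covariance term into a regularised version of this objective.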
In this paper we address this oversight and study the influence that uncertainty in the moments, the mean and covariance matrix, has on the construction of predictors under the minimax principle. By using high-probability upper bounds on the deviation between the true moments and their empirical counterparts, we can re-formulate the minimax optimisation to incorporate this uncertainty and find the predictor that minimises the \textit{high-probability}, worst-case misclassification rate. The moment uncertainty introduces a natural regularisation component into the optimisation, where each class is regularised in proportion to its degree of moment uncertainty. Experimental results support the view that, with limited data availability, incorporating moment uncertainty can lead to better predictors.

Bump detection in the presence of dependency: does it ease or does it load?
https://www.zbmath.org/1462.62516
Enikeeva, Farida; Munk, Axel; Pohlmann, Markus; Werner, Frank

This paper analyzes the problem of bump detection in the presence of dependency. Observations of a triangular array of Gaussian vectors \(Y=\mu_n+\xi_n\) are considered, with a known positive definite covariance matrix \(\Sigma_n\) but an unknown mean vector \(\mu_n\); the noise \(\xi_n\) consists of \(n\) consecutive samples of a stationary process \((Z_t)_{t\in\mathbb Z}\). The goal of the paper is to analyze how difficult it is to detect abrupt changes based on the observation \(Y\) when \(\mu_n\) is obtained by equidistant sampling. Asymptotic lower and upper bounds for the amplitude of detectable signals are also provided.
The study is organized in five main sections. The first contains the model and problem statement, the terminology needed for the results, and a presentation of related work. The main results, obtained under suitable assumptions, are presented in the second section: the asymptotic minimax detection boundary (Theorem 2.1) and non-asymptotic results for a seemingly simpler testing problem in which the possible bumps belong to a set of non-overlapping intervals. In the third section, applying Theorem 2.1 to ARMA\((p,q)\) processes yields the asymptotic detection boundary, and non-asymptotic results for AR\((p)\) noise are also obtained. In the fourth section, numerical simulations examine the finite-sample accuracy of the asymptotic upper bounds for the detection boundary. The proofs of the main results, based on laws of large numbers, are given in the fifth section. Supplementary material (\url{doi:10.3150/20-BEJ1226SUPP.pdf}) containing several results needed for this study is also mentioned.
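As a rough illustration of the model \(Y=\mu_n+\xi_n\) with dependent noise, the following Python sketch generates AR(1) noise, plants a bump in the mean vector, and evaluates a naive scan statistic. All numerical choices (AR coefficient, bump amplitude and width, window length) are arbitrary illustrations, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def ar1_noise(n, phi=0.5, sigma=1.0):
    """n consecutive samples of a stationary AR(1) process Z_t = phi Z_{t-1} + eps_t."""
    z = np.empty(n)
    z[0] = rng.normal(scale=sigma / np.sqrt(1.0 - phi**2))  # draw from stationary law
    for t in range(1, n):
        z[t] = phi * z[t - 1] + rng.normal(scale=sigma)
    return z

def scan_statistic(y, window):
    """Maximum window sum scaled by sqrt(window) -- a naive scan over bump locations."""
    sums = np.convolve(y, np.ones(window), mode="valid")
    return np.max(sums) / np.sqrt(window)

n, window = 1000, 50
bump = np.zeros(n)
bump[400:450] = 2.0                 # bump amplitude chosen large for clear separation
noise = ar1_noise(n)
t_null = scan_statistic(noise, window)                 # no bump present
t_alt = scan_statistic(bump + ar1_noise(n), window)    # bump present
```

Note that the \(\sqrt{\text{window}}\) scaling above ignores the noise covariance entirely; the dependence structure of \((Z_t)\) changes the null distribution of such statistics, which is exactly the effect on the detection boundary that the paper quantifies.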