
Risk bounds for mixture density estimation. (English) Zbl 1141.62024

Summary: We focus on the problem of estimating a bounded density using a finite combination of densities from a given class. We consider the maximum likelihood estimator (MLE) and the greedy procedure described by J. Q. Li and A. R. Barron [Mixture density estimation. S. A. Solla et al. (eds.), Advances in Neural Information Processing Systems 12, MIT Press, Cambridge, Massachusetts, 279-285 (2000)] under the additional assumption of boundedness of the densities. We prove an \(O(n^{-1/2})\) bound on the estimation error which does not depend on the number of densities in the estimated combination. Under the boundedness assumption, this improves the bound of Li and Barron by removing the \(\log n\) factor, and it also generalizes their result to base classes with converging Dudley integral.
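The greedy procedure of Li and Barron builds a k-component mixture from the (k-1)-component one by adding a single new component in a convex combination, chosen to increase the sample log-likelihood, rather than refitting all components jointly. The following is a minimal illustrative sketch only, not the paper's implementation: it assumes Gaussian component densities with a fixed bandwidth, a fixed candidate grid of means, and the common step size \(\alpha_k = 2/(k+1)\); all names and parameters are hypothetical.

```python
import numpy as np

def greedy_mixture_mle(x, candidate_means, sigma, n_steps):
    """Greedy (Li-Barron style) mixture density estimate.

    At step k the new estimate is f_k = (1 - a_k) f_{k-1} + a_k * phi,
    where phi is the single candidate component that maximizes the
    sample log-likelihood of the combined mixture.
    Returns the fitted density evaluated at the sample points x.
    """
    def phi(pts, m):
        # Bounded Gaussian component density with mean m, scale sigma.
        return np.exp(-0.5 * ((pts - m) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

    # Initialize with the single component of highest likelihood.
    f = max((phi(x, m) for m in candidate_means),
            key=lambda d: np.sum(np.log(d)))
    for k in range(2, n_steps + 1):
        a = 2.0 / (k + 1)  # convex step size; previous mixture is frozen
        f = max(((1 - a) * f + a * phi(x, m) for m in candidate_means),
                key=lambda d: np.sum(np.log(d)))
    return f
```

Because each step only reweights the previous mixture against one new component, the cost per step is linear in the candidate class, which is the computational appeal of the greedy scheme over a full joint MLE.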

MSC:

62G07 Density estimation
62G05 Nonparametric estimation
62G20 Asymptotic properties of nonparametric inference

References:

[1] A.R. Barron, Universal approximation bounds for superpositions of a sigmoidal function. IEEE Trans. Inform. Theory 39 (1993) 930-945. Zbl 0818.68126 · doi:10.1109/18.256500
[2] A.R. Barron, Approximation and estimation bounds for artificial neural networks. Machine Learning 14 (1994) 115-133. Zbl 0818.68127
[3] L. Birgé and P. Massart, Rates of convergence for minimum contrast estimators. Probab. Theory Related Fields 97 (1993) 113-150. Zbl 0805.62037 · doi:10.1007/BF01199316
[4] R.M. Dudley, Uniform Central Limit Theorems. Cambridge University Press (1999). Zbl 0951.60033 · doi:10.1017/CBO9780511665622
[5] L.K. Jones, A simple lemma on greedy approximation in Hilbert space and convergence rates for projection pursuit regression and neural network training. Ann. Stat. 20 (1992) 608-613. Zbl 0746.62060 · doi:10.1214/aos/1176348546
[6] M. Ledoux and M. Talagrand, Probability in Banach Spaces. Springer-Verlag, New York (1991). Zbl 0748.60004
[7] J. Li and A. Barron, Mixture density estimation, in Advances in Neural Information Processing Systems 12, S.A. Solla, T.K. Leen and K.-R. Müller, Eds. MIT Press, Cambridge, MA (2000) 279-285.
[8] J. Li, Estimation of Mixture Models. Ph.D. Thesis, Department of Statistics, Yale University (1999).
[9] C. McDiarmid, On the method of bounded differences. Surveys in Combinatorics (1989) 148-188. Zbl 0712.05012
[10] S. Mendelson, On the size of convex hulls of small sets. J. Machine Learning Research 2 (2001) 1-18. Zbl 1008.68107 · doi:10.1162/153244302760185225
[11] P. Niyogi and F. Girosi, Generalization bounds for function approximation from scattered noisy data. Adv. Comput. Math. 10 (1999) 51-80. Zbl 1053.65506 · doi:10.1023/A:1018966213079
[12] S.A. van de Geer, Rates of convergence for the maximum likelihood estimator in mixture models. Nonparametric Statistics 6 (1996) 293-310. Zbl 0872.62039 · doi:10.1080/10485259608832677
[13] S.A. van de Geer, Empirical Processes in M-Estimation. Cambridge University Press (2000). Zbl 1179.62073
[14] A.W. van der Vaart and J.A. Wellner, Weak Convergence and Empirical Processes with Applications to Statistics. Springer-Verlag, New York (1996). Zbl 0862.60002
[15] W.H. Wong and X. Shen, Probability inequalities for likelihood ratios and convergence rates for sieve MLEs. Ann. Stat. 23 (1995) 339-362. Zbl 0829.62002 · doi:10.1214/aos/1176324524
This reference list is based on information provided by the publisher or from digital mathematics libraries. Its items are heuristically matched to zbMATH identifiers and may contain data conversion errors. In some cases, that data has been complemented or enhanced by data from zbMATH Open. The list attempts to reflect the references in the original paper as accurately as possible, without claiming completeness or a perfect matching.