Meng, Xiao-Li; Rubin, Donald B.
Maximum likelihood estimation via the ECM algorithm: A general framework. (English) Zbl 0778.62022
Biometrika 80, No. 2, 267-278 (1993).

Summary: Two major reasons for the popularity of the EM algorithm are that its maximization step involves only complete-data maximum likelihood estimation, which is often computationally simple, and that its convergence is stable, with each iteration increasing the likelihood. When complete-data maximum likelihood estimation is itself complicated, EM is less attractive because its \(M\)-step is then computationally unattractive. In many cases, however, complete-data maximum likelihood estimation is relatively simple when conditional on some function of the parameters being estimated. We introduce a class of generalized EM algorithms, which we call ECM algorithms, for Expectation/Conditional Maximization (CM), that take advantage of the simplicity of complete-data conditional maximum likelihood estimation by replacing a complicated \(M\)-step of EM with several computationally simpler CM-steps. We show that the ECM algorithm shares all the appealing convergence properties of EM, such as always increasing the likelihood, and present several illustrative examples.

Cited in 5 Reviews; cited in 359 Documents.

MSC:
62F10 Point estimation
65C99 Probabilistic methods, stochastic differential equations

Keywords: conditional maximization; constrained optimization; Gibbs sampler; incomplete data; iterated conditional modes; iterative proportional fitting; missing data; loglinear model; contingency tables; regression model; complete-data maximum likelihood estimation; generalized EM algorithms; ECM algorithm; conditional maximum likelihood estimation
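To make the CM idea concrete, here is a minimal sketch (not one of the paper's own examples) of an ECM iteration in Python, assuming a univariate location-scale Student-t sample with known degrees of freedom \(\nu\); the function name `ecm_t` and all tuning choices are illustrative. The E-step computes the expected latent precisions under the normal/Gamma scale-mixture representation, and the \(M\)-step is replaced by two CM-steps: maximize over \(\mu\) with \(\sigma^2\) fixed, then over \(\sigma^2\) with \(\mu\) fixed at its new value.

```python
import numpy as np
from scipy import stats

def ecm_t(x, nu=4.0, n_iter=50):
    """Illustrative ECM for a location-scale Student-t sample, df nu known.

    Complete data augment each x_i with a latent Gamma precision tau_i
    (the scale-mixture representation of the t distribution).
    E-step : w_i = E[tau_i | x_i] = (nu + 1) / (nu + (x_i - mu)^2 / sigma2)
    CM-step 1: maximize the expected complete-data loglik over mu
               with sigma2 held fixed  -> weighted mean.
    CM-step 2: maximize over sigma2 with mu fixed at its new value
               -> weighted mean square.
    Each sweep increases the observed-data likelihood, as ECM guarantees.
    """
    mu, sigma2 = np.median(x), np.var(x)
    trace = []
    for _ in range(n_iter):
        w = (nu + 1.0) / (nu + (x - mu) ** 2 / sigma2)  # E-step
        mu = np.sum(w * x) / np.sum(w)                  # CM-step 1
        sigma2 = np.mean(w * (x - mu) ** 2)             # CM-step 2
        trace.append(stats.t.logpdf(x, df=nu, loc=mu,
                                    scale=np.sqrt(sigma2)).sum())
    return mu, np.sqrt(sigma2), trace

rng = np.random.default_rng(0)
x = 2.0 + 1.5 * rng.standard_t(df=4, size=500)
mu_hat, sigma_hat, trace = ecm_t(x)
print(f"mu = {mu_hat:.3f}, sigma = {sigma_hat:.3f}")
print("loglik monotone:", all(a <= b + 1e-9 for a, b in zip(trace, trace[1:])))
```

The monotonicity check at the end reflects the convergence property stressed in the summary: because each CM-step can only increase the expected complete-data log-likelihood, the observed-data likelihood never decreases across iterations.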