Choosing starting values for the EM algorithm for getting the highest likelihood in multivariate Gaussian mixture models. (English) Zbl 1429.62235

Summary: Simple methods for choosing sensible starting values for the EM algorithm, used to obtain maximum likelihood parameter estimates in mixture models, are compared. They are based on random initialization, on a classification EM algorithm (CEM), on a stochastic EM algorithm (SEM), or on preliminary short runs of EM itself. These initializations are embedded in a search/run/select strategy, which can be compounded by repeating the three steps. The strategies are compared in the context of multivariate Gaussian mixtures through numerical experiments on both simulated and real data sets, for a fixed target number of iterations. The main conclusions of these experiments are as follows. Simple random initialization, which is probably the most common way of starting EM, is often outperformed by strategies using CEM, SEM, or short runs of EM before running EM itself. Moreover, compounding is generally profitable, since a single run of EM can often lead to suboptimal solutions. Otherwise, none of the strategies examined can be regarded as the best, and it is difficult to characterize situations in which a particular strategy can be expected to outperform the others. Nevertheless, the strategy that initializes EM with short runs of EM can be recommended. This strategy, which to our knowledge had not been used before the present study, has several advantages: it is simple, performs well in many situations, presupposes no particular form of the mixture to be fitted to the data, and appears to be little affected by noisy data.
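To make the recommended search/run/select strategy concrete, the following is a minimal sketch (not the authors' implementation) of initializing EM with short runs of EM: several short EM runs are launched from random starting values, the run with the highest log-likelihood is selected, and a long EM run is then started from the selected parameters. For readability the sketch uses a univariate Gaussian mixture with NumPy; the function names, the number of short runs, and the short-run length are illustrative choices, not values from the paper.

```python
import numpy as np

def em(x, w, mu, sigma, n_iter):
    """Run n_iter EM iterations for a univariate Gaussian mixture."""
    for _ in range(n_iter):
        # E-step: responsibilities r[i, k] of component k for point i
        dens = w * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) \
                 / (sigma * np.sqrt(2 * np.pi))
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: weighted updates of the mixture parameters
        nk = r.sum(axis=0)
        w = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return w, mu, sigma

def loglik(x, w, mu, sigma):
    """Observed-data log-likelihood of the mixture."""
    dens = (w * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2)
              / (sigma * np.sqrt(2 * np.pi))).sum(axis=1)
    return np.log(dens).sum()

def short_runs_init(x, k, n_starts=10, short_iter=5, seed=None):
    """Search/select step: several short EM runs from random starts,
    keep the parameters of the run with the highest log-likelihood."""
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(n_starts):
        mu = rng.choice(x, size=k, replace=False)   # random data points as means
        w = np.full(k, 1.0 / k)
        sigma = np.full(k, x.std())
        params = em(x, w, mu, sigma, short_iter)
        ll = loglik(x, *params)
        if best is None or ll > best[0]:
            best = (ll, params)
    return best[1]

# Demo on simulated data: two well-separated components
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-3, 1, 200), rng.normal(3, 1, 200)])
w, mu, sigma = short_runs_init(x, 2, seed=1)
w, mu, sigma = em(x, w, mu, sigma, 100)  # long run from the selected start
```

Repeating the whole search/run/select cycle (the "compounding" discussed in the summary) would amount to calling `short_runs_init` and the long `em` run several times and keeping the solution with the highest final log-likelihood.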


62H30 Classification and discrimination; cluster analysis (statistical aspects)


Full Text: DOI

