
Accurate tree-based missing data imputation and data fusion within the statistical learning paradigm. (English) Zbl 1360.62324

Summary: The framework of this paper is statistical data editing, specifically how to edit or impute missing or contradictory data and how to merge two independent data sets that each present some lack of information. Assuming a missing-at-random mechanism, the paper provides an accurate tree-based methodology for both missing data imputation and data fusion that is justified within the Statistical Learning Theory of Vapnik. It considers an incremental variable imputation method to improve computational efficiency, as well as boosted trees to gain prediction accuracy over other methods. As a result, the best approximation of the structural risk (also known as the irreducible error) is reached, thus minimizing the generalization (or prediction) error of imputation. Moreover, the method is distribution free: it holds independently of the underlying probability law generating the missing data values. Performance is analyzed on simulated case studies and real-world applications.
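The incremental imputation idea described above can be sketched in code. The following is a minimal illustrative sketch, not the authors' implementation: variables are ordered by increasing missingness and each is imputed in turn from a tree fitted on the complete cases (here a depth-1 regression stump on a single fully observed predictor, for brevity). All function names and the single-predictor simplification are assumptions made for exposition; the paper uses full trees with boosting.

```python
# Illustrative sketch of incremental tree-based imputation.
# Assumptions (not from the paper): one fully observed predictor column,
# a depth-1 regression stump instead of a boosted tree ensemble.

def fit_stump(x, y):
    """Fit a depth-1 regression tree (stump): find the split threshold on x
    minimizing the within-node sum of squared errors of y."""
    pairs = sorted(zip(x, y))
    best = None
    for i in range(1, len(pairs)):
        thr = (pairs[i - 1][0] + pairs[i][0]) / 2
        left = [v for u, v in pairs if u <= thr]
        right = [v for u, v in pairs if u > thr]
        if not left or not right:
            continue  # degenerate split (tied x values)
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        sse = (sum((v - lm) ** 2 for v in left)
               + sum((v - rm) ** 2 for v in right))
        if best is None or sse < best[0]:
            best = (sse, thr, lm, rm)
    if best is None:  # no valid split: fall back to a constant predictor
        m = sum(y) / len(y)
        return (float("inf"), m, m)
    return best[1:]  # (threshold, left_mean, right_mean)

def predict_stump(stump, x):
    thr, lm, rm = stump
    return lm if x <= thr else rm

def incremental_impute(rows, predictor_col, target_cols):
    """Impute missing values (None), handling the least-missing column first,
    fitting each stump on the rows where that column is observed."""
    order = sorted(target_cols,
                   key=lambda c: sum(r[c] is None for r in rows))
    for c in order:
        complete = [r for r in rows if r[c] is not None]
        stump = fit_stump([r[predictor_col] for r in complete],
                          [r[c] for r in complete])
        for r in rows:
            if r[c] is None:
                r[c] = predict_stump(stump, r[predictor_col])
    return rows
```

In the paper's full method, each newly imputed variable would join the predictor set for the next one (the lexicographic/incremental ordering of [13]), and the stump would be replaced by boosted regression or classification trees.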

MSC:

62H30 Classification and discrimination; cluster analysis (statistical aspects)
65C60 Computational problems in statistics (MSC2010)
68T05 Learning and adaptive systems in artificial intelligence

References:

[1] ALUJA-BANET, T., MORINEAU, A., and RIUS, R. (1997), ”La Greffe de Fichiers et Ses Conditions D’application. Méthode et Exemple”, in Enquêtes et Sondages, eds. G. Brossier and A.M. Dussaix, Paris: Dunod, pp. 94–102.
[2] ALUJA-BANET, T., RIUS, R., NONELL, R., and MARTÍNEZ-ABARCA, M.J. (1998), ”Data Fusion and File Grafting”, in Analyses Multidimensionelles Des Données (1st ed.), NGUS 97, eds. A. Morineau and K. Fernández Aguirre, Paris: CISIA-CERESTA, pp. 7–14.
[3] ALUJA-BANET, T., DAUNIS-I-ESTADELLA, J., and PELLICER, D. (2007), ”GRAFT, a Complete System for Data Fusion”, Computational Statistics and Data Analysis, 52, 635–649. · Zbl 1452.62011 · doi:10.1016/j.csda.2006.11.029
[4] BARCENA, M.J., and TUSELL, F. (1999), ”Enlace de Encuestas: Una Propuesta Metodológica y Aplicación a la Encuesta de Presupuestos de Tiempo”, Qüestiio, 23(2), 297–320.
[5] BARCENA, M.J., and TUSELL, F. (2000), ”Tree-based Algorithms for Missing Data Imputation”, in Proceedings in Computational Statistics, COMPSTAT 2000, eds. J.G. Bethlehem, and P.G.M. van der Heijden, Heidelberg: Physica-Verlag, pp. 193–198.
[6] BREIMAN, L. (1996), ”Bagging Predictors”, Machine Learning, 24, 123–140. · Zbl 0858.68080
[7] BREIMAN, L. (1998), ”Arcing Classifiers”, The Annals of Statistics, 26(3), 801–849. · Zbl 0934.62064 · doi:10.1214/aos/1024691079
[8] BREIMAN, L., FRIEDMAN, J.H., OLSHEN, R.A., and STONE, C.J. (1984), Classification and Regression Trees, Belmont CA: Wadsworth International Group. · Zbl 0541.62042
[9] CAPPELLI, C., MOLA, F., and SICILIANO, R. (2002), ”A Statistical Approach to Growing a Reliable Honest Tree”, Computational Statistics and Data Analysis, 38, 285–299. · Zbl 1079.62516 · doi:10.1016/S0167-9473(01)00044-5
[10] CHU, C.K., and CHENG, P.E. (1995), ”Nonparametric Regression Estimation With Missing Data”, Journal of Statistical Planning and Inference, 48, 85–99. · Zbl 0897.62038 · doi:10.1016/0378-3758(94)00151-K
[11] CONTI, P.L., MARELLA, D., and SCANU, M. (2008), ”Evaluation of Matching Noise for Imputation Techniques Based on Nonparametric Local Linear Regression Estimators”, Computational Statistics and Data Analysis, 43, 354–365. · Zbl 1231.62062 · doi:10.1016/j.csda.2008.07.041
[12] CONVERSANO, C., and SICILIANO, R. (2008), ”Statistical Data Editing”, in Data Warehousing and Mining: Concepts, Methodologies, Tools, and Applications (Vol. 4), ed. J. Wang, Hershey PA: Information Science Reference, pp. 1835–1840.
[13] CONVERSANO, C., and SICILIANO, R. (2009), ”Incremental Tree-Based Missing Data Imputation with Lexicographic Ordering”, Journal of Classification, 26(3), 361–379. · Zbl 1337.62128 · doi:10.1007/s00357-009-9038-8
[14] D’AMBROSIO, A., ARIA, M., and SICILIANO, R. (2007), ”Robust Tree-based Incremental Imputation Method for Data Fusion”, in LNCS 4273: Advances in Intelligent Data Analysis, Berlin/Heidelberg: Springer-Verlag, pp. 174–183.
[15] DAVID, M.H., LITTLE, R.J.A., SAMUEL, M.E., and TRIEST, R.K. (1986), ”Alternative Methods for CPS Income Imputation”, Journal of the American Statistical Association, 81, 29–41. · doi:10.1080/01621459.1986.10478235
[16] DE WAAL, T., PANNEKOEK, J., and SCHOLTUS, S. (2011), Handbook of Statistical Data Editing and Imputation, New York: Wiley.
[17] DEMPSTER, A.P., LAIRD, N.M., and RUBIN, D.B. (1977), ”Maximum Likelihood Estimation from Incomplete Data via the EM Algorithm (With Discussion)”, Journal of the Royal Statistical Society, Series B, 39, 1–38. · Zbl 0364.62022
[18] DIETTERICH, T.G. (2000), ”Ensemble Methods in Machine Learning”, in First International Workshop on Multiple Classifier Systems, eds. J. Kittler and F. Roli, Springer-Verlag, pp. 1–15.
[19] D’ORAZIO, M., DI ZIO, M., and SCANU, M. (2006), Statistical Matching: Theory and Practice, Chichester: John Wiley & Sons. · Zbl 1107.62008
[20] EIBL, G., and PFEIFFER, K. P. (2002), ”How To Make AdaBoost.M1 Work for Weak Base Classifiers by Changing Only One Line of the Code”, in Machine Learning: ECML 2002, Lecture Notes in Artificial Intelligence, Heidelberg: Springer. · Zbl 1014.68933
[21] FELLEGI, I.P., and HOLT, D. (1976), ”A Systematic Approach to Automatic Edit and Imputation”, Journal of the American Statistical Association, 71, 17–35. · doi:10.1080/01621459.1976.10481472
[22] FORD, B.N. (1983), ”An Overview of Hot Deck Procedures”, in Incomplete Data in Sample Surveys, Vol. II: Theory and Annotated Bibliography, eds. G. Madow, I. Olkin and D.B. Rubin, New York: Academic Press.
[23] FREUND, Y., and SCHAPIRE, R.E. (1997), ”A Decision-Theoretic Generalization of On-Line Learning and An Application To Boosting”, Journal of Computer and System Sciences, 55(1), 119–139. · Zbl 0880.68103 · doi:10.1006/jcss.1997.1504
[24] GEY, S., and POGGI, J.M. (2006), ”Boosting and Instability for Regression Trees”, Computational Statistics and Data Analysis, 50, 533–550. · Zbl 1431.62155 · doi:10.1016/j.csda.2004.09.001
[25] HASTIE, T.J., TIBSHIRANI, R.J., and FRIEDMAN, J.H. (2009), The Elements of Statistical Learning (2nd ed.), New York: Springer Verlag. · Zbl 1273.62005
[26] IBRAHIM, J.G. (1990), ”Incomplete Data in Generalized Linear Models”, Journal of the American Statistical Association, 85, 765–769. · doi:10.1080/01621459.1990.10474938
[27] IBRAHIM, J.G., LIPSITZ, S.R., and CHEN, M.H. (1999), ”Missing Covariates in Generalized Linear Models When the Missing Data Mechanism Is Non-Ignorable”, Journal of the Royal Statistical Society, Series B, 61(1), 173–190. · Zbl 0917.62060 · doi:10.1111/1467-9868.00170
[28] KOHAVI, R., and WOLPERT, D. (1996), ”Bias Plus Variance for Zero-One Loss Functions”, in Proceedings of the 13th International Machine Learning Conference, San Mateo CA: Morgan Kaufmann, pp. 275–283.
[29] KONG, E., and DIETTERICH, T.G. (1995), ”Error-Correcting Output Coding Corrects Bias and Variance”, in The XII International Conference on Machine Learning, San Francisco CA: Morgan Kaufmann, pp. 313–321.
[30] LAKSHMINARAYAN, K., HARP, S.A., GOLDMAN, R., and SAMAD, T. (1996), ”Imputation of Missing Data Using Machine Learning Techniques”, in Proceedings of the Second International Conference on Knowledge Discovery and Data Mining, eds. Simoudis, Han and Fayyad, Menlo Park CA: AAAI Press, pp. 140–145.
[31] LITTLE, R.J.A. (1992), ”Regression with Missing X’s: A Review”, Journal of the American Statistical Association, 87(420), 1227–1237.
[32] LITTLE, R.J.A., and RUBIN, D.B. (1987), Statistical Analysis with Missing Data, New York: John Wiley and Sons. · Zbl 0665.62004
[33] McKNIGHT, P.E., McKNIGHT, K.M., SIDANI, S., and FIGUEREDO, A.J. (2007), Missing Data: A Gentle Introduction, New York: The Guilford Press.
[34] MARELLA, D., SCANU, M., and CONTI, P.L. (2008), ”On the Matching Noise of Some Nonparametric Imputation Procedures”, Statistics & Probability Letters, 78(12), 1593–1600. · Zbl 1325.62092 · doi:10.1016/j.spl.2008.01.020
[35] MOLA, F., and SICILIANO, R. (1992), ”A Two-Stage Predictive Splitting Algorithm in Binary Segmentation”, in Computational Statistics: COMPSTAT 92, 1, eds. Y. Dodge and J. Whittaker, Heidelberg: Physica Verlag, pp. 179–184.
[36] MOLA, F., and SICILIANO, R. (1997), ”A Fast Splitting Procedure for Classification and Regression Trees”, Statistics and Computing, 7, 208–216. · doi:10.1023/A:1018590219790
[37] OUDSHOORN, C.G.M., VAN BUUREN, S., and VAN RIJCKEVORSEL, J.L.A. (1999), ”Flexible Multiple Imputation by Chained Equations of the AVO-95 Survey”, TNO Preventie en Gezondheid, TNO/PG 99.045.
[38] PAAS, G. (1985), ”Statistical Record Linkage Methodology, State of the Art and Future Prospects”, Bulletin of the International Statistical Institute, Proceedings of the 45th Session, LI, Book 2.
[39] PETRAKOS, G., CONVERSANO, C., FARMAKIS, G., MOLA, F., SICILIANO, R., and STAVROPOULOS, P. (2004), ”New Ways to Specify Data Edits”, Journal of the Royal Statistical Society, Series A, 167(2), 249–274. · Zbl 1408.62023 · doi:10.1046/j.1467-985X.2003.00745.x
[40] RÄSSLER, S. (2002), Statistical Matching: A Frequentist Theory, Practical Applications and Alternative Bayesian Approaches, New York: Springer-Verlag.
[41] RÄSSLER, S. (2004), ”Data Fusion: Identification Problems, Validity, and Multiple Imputation”, Austrian Journal of Statistics, 33(1 & 2), 153–171.
[42] RUBIN, D.B. (1976), ”Inference and Missing Data (with Discussion)”, Biometrika, 63, 581–592. · Zbl 0344.62034 · doi:10.1093/biomet/63.3.581
[43] RUBIN, D.B. (1987), Multiple Imputation for Nonresponse in Surveys, New York: Wiley. · Zbl 1070.62007
[44] SANDE, I.G. (1983), ”Hot Deck Imputation Procedures”, in Incomplete Data in Sample Surveys, Vol. III. Symposium on Incomplete Data: Proceedings, New York: Academic Press.
[45] SAPORTA,G. (2002), ”Data Fusion and Data Grafting”, Computational Statistics and Data Analysis, 38, 465-473. · Zbl 1072.62502 · doi:10.1016/S0167-9473(01)00072-X
[46] SCHAPIRE, R.E., FREUND, Y., BARTLETT, P., and LEE, W.S. (1998), ”Boosting the Margin: A New Explanation for the Effectiveness of Voting Methods”, The Annals of Statistics, 26(5), 1651–1686. · Zbl 0929.62069 · doi:10.1214/aos/1024691352
[47] SICILIANO, R., and CONVERSANO, C. (2002), ”Tree-Based Classifiers for Conditional Missing Data Incremental Imputation”, Proceedings of the International Conference on Data Clean (Jyväskylä, May 29-31, 2002), University of Jyväskylä, Finland. · Zbl 1337.62128
[48] SICILIANO, R., and CONVERSANO, C. (2008), ”Decision Tree Induction”, in Data Warehousing And Mining: Concepts, Methodologies, Tools, And Applications (Vol. 2), ed. J. Wang, Hershey PA: Information Science Reference, pp. 624–629.
[49] SICILIANO, R., and MOLA, F. (1996), ”A Fast Regression Tree Procedure”, in Statistical Modelling, Proceedings of the 11th International Workshop on Statistical Modeling, eds. A. Forcina, G.M. Marchetti, R. Hatzinger, and G. Galmacci, Orvieto, July 15–19, Graphos, Città di Castello, pp. 332–340.
[50] TIBSHIRANI, R. (1996), ”Bias, Variance and Prediction Error for Classification Rules”, Technical Report, University of Toronto, Department of Statistics.
[51] VAPNIK, V.N. (1995), The Nature of Statistical Learning Theory, New York: Springer Verlag. · Zbl 0833.62008
[52] VAPNIK, V.N. (1998), Statistical Learning Theory, New York: Wiley. · Zbl 0935.62007
[53] VAPNIK, V.N., and CHERVONENKIS, A.J. (1989), ”The Necessary and Sufficient Conditions for Consistency of the Method of Empirical Risk Minimization”, Pattern Recognition and Image Analysis, 284–305.
[54] VAN BUUREN, S., BRAND, J.P.L., GROOTHUIS-OUDSHOORN, C.G.M., and RUBIN, D.B. (2006), ”Fully Conditional Specification in Multivariate Imputation”, Journal of Statistical Computation and Simulation, 76(12), 1049–1064. · Zbl 1144.62332 · doi:10.1080/10629360600810434
[55] WINKLER, W. E. (1999), ”State of Statistical Data Editing and Current Research Problems”, Working paper No 29 in the UN/ECE Work Session on Statistical Data Editing, Rome, 2-4 June 1999.