
Post-hoc analyses in multiple regression based on prediction error. (English) Zbl 1206.62128

Summary: A well-known problem in multiple regression is that the hypothesis that all slope parameters are equal to zero can be rejected, yet the usual Student \(T\)-tests of the individual parameters find none to be significant. An alternative strategy is to estimate the prediction error, via the 0.632 bootstrap method, for all models of interest and to declare the parameters associated with the model yielding the smallest prediction error to differ from zero. The main results of this paper are that this latter strategy can have practical value relative to Student's \(T\); that replacing squared error with absolute error can be beneficial in some situations; and that replacing least squares with an extension of the Theil-Sen estimator [H. Theil, Indag. Math. 12, 85–91 (1950; Zbl 0036.21601); P. K. Sen, J. Am. Stat. Assoc. 63, 1379–1389 (1968; Zbl 0167.47202)] can substantially increase the probability of identifying the correct model under circumstances that are described.
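To make the strategy concrete: the 0.632 bootstrap estimates prediction error as \(\widehat{\mathrm{Err}}_{0.632} = 0.368\,\overline{\mathrm{err}} + 0.632\,\widehat{\mathrm{Err}}_{\mathrm{oob}}\), where \(\overline{\mathrm{err}}\) is the apparent (resubstitution) error and \(\widehat{\mathrm{Err}}_{\mathrm{oob}}\) is the average error on observations left out of each bootstrap sample [Efron and Tibshirani, loc. cit.]. What follows is a minimal Python sketch of that selection rule, not the authors' code: all names are illustrative, the exhaustive all-subsets search is one possible way to enumerate "all models of interest", the loss is pluggable so squared error can be swapped for absolute error, and only the classical one-predictor Theil-Sen fit is shown (the paper uses an extension to multiple predictors that is not reproduced here).

import itertools
import numpy as np

def _design(X):
    # Prepend an intercept column.
    return np.column_stack([np.ones(len(X)), X])

def fit_ols(X, y):
    # Ordinary least squares; the paper also studies a robust alternative.
    return np.linalg.lstsq(_design(X), y, rcond=None)[0]

def theil_sen_1d(X, y):
    # Classical one-predictor Theil-Sen fit: slope = median of all pairwise
    # slopes, intercept = median residual. Valid for a single predictor only;
    # the paper's multivariate extension is more involved.
    x = np.ravel(X)
    slopes = [(y[j] - y[i]) / (x[j] - x[i])
              for i in range(len(x)) for j in range(i + 1, len(x))
              if x[j] != x[i]]
    b1 = np.median(slopes)
    return np.array([np.median(y - b1 * x), b1])

def predict(beta, X):
    return _design(X) @ beta

sq_loss = lambda y, yhat: (y - yhat) ** 2    # squared error
abs_loss = lambda y, yhat: np.abs(y - yhat)  # absolute error

def err632(X, y, loss=sq_loss, fit=fit_ols, n_boot=200, seed=0):
    # 0.632 bootstrap estimate of prediction error:
    # 0.368 * apparent error + 0.632 * mean out-of-bag error.
    rng = np.random.default_rng(seed)
    n = len(y)
    apparent = loss(y, predict(fit(X, y), X)).mean()
    oob_errors = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)              # bootstrap sample
        oob = np.setdiff1d(np.arange(n), idx)    # left-out observations
        if oob.size == 0:
            continue
        beta = fit(X[idx], y[idx])
        oob_errors.append(loss(y[oob], predict(beta, X[oob])).mean())
    return 0.368 * apparent + 0.632 * np.mean(oob_errors)

def select_model(X, y, loss=sq_loss, fit=fit_ols):
    # Evaluate every nonempty subset of predictors and return the subset
    # with the smallest estimated prediction error; the slopes of that
    # subset are then declared to differ from zero.
    p = X.shape[1]
    best, best_err = None, np.inf
    for k in range(1, p + 1):
        for subset in itertools.combinations(range(p), k):
            e = err632(X[:, list(subset)], y, loss, fit)
            if e < best_err:
                best, best_err = subset, e
    return best, best_err

Calling select_model(X, y, loss=abs_loss) gives the absolute-error variant, and passing fit=theil_sen_1d on a single-predictor problem gives the Theil-Sen variant the summary compares. Note that with \(p\) predictors the search fits \(2^p - 1\) submodels, so this enumeration is only practical for modest \(p\).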

MSC:

62J05 Linear regression; mixed models
62J15 Paired and multiple comparisons; multiple testing
62F03 Parametric hypothesis testing

Software:

bootstrap

References:

[1] DOI: 10.2307/2291140 · Zbl 0821.62040 · doi:10.2307/2291140
[2] DOI: 10.2307/1269730 · Zbl 0862.62059 · doi:10.2307/1269730
[3] DOI: 10.2307/2290776 · Zbl 0783.62024 · doi:10.2307/2290776
[4] Dayton C. M., Amer. Statist. 52 pp 144– (1998) · Zbl 04546816 · doi:10.2307/2685473
[5] Derksen S., British J. Math. Statist. Psych. 45 pp 265– (1992) · doi:10.1111/j.2044-8317.1992.tb00992.x
[6] Dietz E. J., Comm. Statist. Simulation Comput. 16 pp 1209– (1987) · Zbl 0695.62156 · doi:10.1080/03610918708812645
[7] Dietz E. J., Amer. Statist. 43 pp 35– (1989) · doi:10.2307/2685167
[8] DOI: 10.2307/2288636 · Zbl 0543.62079 · doi:10.2307/2288636
[9] DOI: 10.1214/009053604000000067 · Zbl 1091.62054 · doi:10.1214/009053604000000067
[10] Efron, B. and Tibshirani, R. J. 1993. ”An Introduction to the Bootstrap”. New York: Chapman and Hall. · Zbl 0835.62038
[11] Fairley D., Amer. Statist. 40 pp 138– (1986) · doi:10.2307/2684873
[12] Hoaglin D. C., Exploring Data Tables, Trends and Shapes (1985) · Zbl 0659.62002
[13] Miller, A. J. 1990. ”Subset Selection in Regression”. London: Chapman and Hall. · Zbl 0702.62057
[14] Montgomery, D. C. and Peck, E. A. 1992. ”Introduction to Linear Regression Analysis”. New York: Wiley. · Zbl 0850.62529
[15] Rousseeuw, P. J. and Leroy, A. M. 1987. ”Robust Regression & Outlier Detection”. New York: Wiley. · Zbl 0711.62030
[16] DOI: 10.2307/2285891 · Zbl 0167.47202 · doi:10.2307/2285891
[17] Shao J., J. Amer. Statist. Assoc. 91 pp 655– (1996) · Zbl 0869.62030 · doi:10.2307/2291661
[18] Staudte, R. G. and Sheather, S. J. 1990. ”Robust Estimation and Testing”. New York: Wiley. · Zbl 0706.62037
[19] Theil H., Indag. Math. 12 pp 85– (1950)
[20] Weisberg S., Ann. Statist. 32 pp 490– (2004)
[21] Wilcox, R. R. 2003. ”Applying Contemporary Statistical Techniques”. San Diego, CA: Academic Press. pp 537–
[22] Wilcox R. R., British J. Math. Statist. Psych. 57 pp 265– (2004) · doi:10.1348/0007110042307230
[23] Wilcox, R. R. 2005. ”Introduction to Robust Estimation and Hypothesis Testing”. San Diego, CA: Academic Press. · Zbl 1113.62036