Abstract
The construction of a reliable, practically useful prediction rule for a future response depends heavily on the "adequacy" of the fitted regression model. In this article, we consider the absolute prediction error, the expected absolute difference between the future and predicted responses, as the model-evaluation criterion. This prediction error is easier to interpret than the average squared error and is equivalent to the misclassification error for a binary outcome. We show that the distributions of the apparent error and its cross-validation counterparts are approximately normal even under a misspecified fitted model. When the prediction rule is "unsmooth," the variance of this normal distribution can be estimated well via a perturbation-resampling method. We also show how to approximate the distribution of the difference between the estimated prediction errors of two competing models. With two real examples, we demonstrate that the resulting interval estimates for prediction errors provide considerably more information about model adequacy than point estimates alone.
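As a minimal illustration (the function name and data below are our own, not from the paper), the absolute prediction error can be estimated empirically as the mean absolute difference between observed and predicted responses; for a binary outcome with 0/1 predictions, |y - ŷ| equals the indicator of a misclassification, so the criterion reduces to the misclassification rate:

```python
# Sketch of the empirical absolute prediction error criterion.
# Names and data are hypothetical, for illustration only.
def abs_prediction_error(y, y_hat):
    """Mean of |y_i - y_hat_i| over the sample."""
    return sum(abs(a - b) for a, b in zip(y, y_hat)) / len(y)

# Continuous response: average absolute residual.
y = [2.0, 3.5, 1.0]
y_hat = [2.5, 3.0, 1.5]
print(abs_prediction_error(y, y_hat))  # 0.5

# Binary response with 0/1 predictions: |y - y_hat| = 1{y != y_hat},
# so the same criterion equals the misclassification rate.
y_bin = [0, 1, 1, 0]
y_bin_hat = [0, 1, 0, 0]
print(abs_prediction_error(y_bin, y_bin_hat))  # 0.25
```

In practice, the paper evaluates this quantity on held-out data (the cross-validation counterparts of the apparent error) rather than on the fitting sample.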
Disciplines
Biostatistics | Statistical Models
Suggested Citation
Tian, Lu; Cai, Tianxi; Goetghebeur, Els; and Wei, L. J., "Model Evaluation Based on the Distribution of Estimated Absolute Prediction Error" (November 2005). Harvard University Biostatistics Working Paper Series. Working Paper 35.
https://biostats.bepress.com/harvardbiostat/paper35