Abstract

Many statistical methods are available for learning a predictor from observed data. Examples include decision trees, neural networks, support vector regression, least angle regression, Logic Regression, and the Deletion/Substitution/Addition algorithm. The optimal algorithm for prediction depends on the underlying data-generating distribution. In this article, we introduce a "super learner," a prediction algorithm that applies any given set of candidate learners and uses cross-validation to select among them. Theory shows that, asymptotically, the super learner performs essentially as well as or better than any of the candidate learners. We briefly present the theory behind the super learner before providing an example based on research aimed at predicting the in vitro phenotypic susceptibility of HIV to antiretroviral drugs from viral mutations. We apply the super learner to predict susceptibility to one protease inhibitor, nelfinavir, using a set of database-derived nonpolymorphic treatment-selected protease mutations.
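To make the cross-validation selection step concrete, the following is a minimal sketch of the "discrete" form of the idea described above: estimate each candidate learner's cross-validated risk and select the candidate with the smallest estimated risk. It is not the authors' implementation; the candidate learners, simulated data, and variable names are illustrative assumptions chosen only to mirror the abstract's setting (mutation indicators predicting a susceptibility outcome).

```python
# Illustrative sketch only: cross-validation selects among candidate learners
# by estimated risk; data and candidates are placeholders, not the paper's.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeRegressor
from sklearn.svm import SVR
from sklearn.linear_model import Lars

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))                                   # stand-in for mutation covariates
y = X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200)    # stand-in for susceptibility outcome

# Candidate learners (illustrative subset of the kinds of methods listed above).
candidates = {
    "decision_tree": DecisionTreeRegressor(max_depth=4),
    "svr": SVR(kernel="rbf"),
    "lars": Lars(),
}

# Estimate each candidate's cross-validated risk (mean squared error).
cv_risk = {
    name: -cross_val_score(est, X, y, cv=10,
                           scoring="neg_mean_squared_error").mean()
    for name, est in candidates.items()
}

# Select the candidate with the smallest cross-validated risk and refit it
# on the full data set.
best_name = min(cv_risk, key=cv_risk.get)
best_learner = candidates[best_name].fit(X, y)
print(best_name, cv_risk)
```

Under this sketch, the selected learner inherits the asymptotic guarantee sketched in the abstract: it performs essentially as well as the best of the candidates considered. The full super learner described in the article goes further by combining the candidates rather than merely selecting one.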

Disciplines

Statistical Models
