Boosting is a machine learning meta-algorithm for supervised learning. Boosting proceeds in stages, incrementally adding to the current learned function. At each stage, a weak learner (one whose accuracy need only be slightly better than chance) is trained on the data. The weak learner's output is then added to the learned function with some strength, proportional to how accurate the weak learner is. The data are then reweighted: examples that the current learned function gets wrong are "boosted" in importance, so that future weak learners will attempt to fix the errors.
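As a concrete illustration, here is a minimal, self-contained sketch of this stagewise process in Python, using decision stumps as weak learners on a toy one-dimensional dataset, with AdaBoost-style strengths and reweighting (the dataset, helper names, and number of rounds are illustrative choices, not part of any standard implementation):

```python
import math

# Toy 1-D dataset; no single threshold rule ("stump") classifies it
# perfectly, so combining several weak learners is genuinely needed.
X = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y = [-1, -1, 1, 1, -1, -1]

def stump(threshold, sign):
    """Weak learner: predicts `sign` when x > threshold, else -sign."""
    return lambda x: sign if x > threshold else -sign

def best_stump(X, y, w):
    """Train the weak learner: pick the stump with lowest weighted error."""
    best, best_err = None, float("inf")
    for t in X:
        for s in (1, -1):
            h = stump(t, s)
            err = sum(wi for xi, yi, wi in zip(X, y, w) if h(xi) != yi)
            if err < best_err:
                best, best_err = h, err
    return best, best_err

def boost(X, y, rounds=3):
    n = len(X)
    w = [1.0 / n] * n          # start with uniform example weights
    ensemble = []              # (strength, weak learner) pairs
    for _ in range(rounds):
        h, err = best_stump(X, y, w)
        # Strength grows as the weak learner's weighted error shrinks
        # (AdaBoost's choice of strength).
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, h))
        # Reweight: mistakes are "boosted", correct examples downweighted.
        w = [wi * math.exp(-alpha * yi * h(xi))
             for xi, yi, wi in zip(X, y, w)]
        z = sum(w)
        w = [wi / z for wi in w]
    # The learned function is the strength-weighted vote of the weak learners.
    return lambda x: 1 if sum(a * h(x) for a, h in ensemble) > 0 else -1

f = boost(X, y)
print([f(x) for x in X])
```

A production implementation would also guard against a weak learner with zero weighted error (its strength would be infinite) and stop early; AdaBoost assumes each weak learner's error stays below 1/2.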
There are several different boosting algorithms, depending on the exact mathematical form of the strength and of the reweighting. One of the most common boosting algorithms is AdaBoost. Most boosting algorithms fit into the AnyBoost framework, which shows that boosting performs gradient descent in function space.
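For concreteness, AdaBoost instantiates the strength and the reweighting as follows: a weak learner $h_t$ with weighted training error $\epsilon_t$ receives strength

\[
\alpha_t = \tfrac{1}{2}\ln\frac{1-\epsilon_t}{\epsilon_t},
\qquad
D_{t+1}(i) = \frac{D_t(i)\,\exp(-\alpha_t\, y_i\, h_t(x_i))}{Z_t},
\]

where $D_t(i)$ is the weight of example $i$ at stage $t$, $y_i \in \{-1, +1\}$ is its label, and $Z_t$ is a normalizer making the new weights sum to one. Misclassified examples ($y_i h_t(x_i) = -1$) thus have their weight multiplied by $e^{\alpha_t} > 1$, while correctly classified examples are downweighted.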
Algorithmically, boosting is related to gradient descent, performed in a function space rather than a parameter space (see Mason et al. below).
- Robert E. Schapire and Yoram Singer. Improved Boosting Algorithms Using Confidence-Rated Predictors. Machine Learning, 37(3):297--336, 1999. http://citeseer.nj.nec.com/schapire99improved.html
- Robert E. Schapire. The Strength of Weak Learnability. Machine Learning, 5(2):197--227, 1990. http://citeseer.ist.psu.edu/schapire90strength.html
- Llew Mason, Jonathan Baxter, Peter Bartlett, and Marcus Frean. Boosting Algorithms as Gradient Descent in Function Space. Technical report, RSISE, Australian National University, 1999. http://citeseer.ist.psu.edu/mason99boosting.html