Ensemble methods like bagging and boosting that combine the decisions of multiple hypotheses are some of the strongest existing machine learning methods. The diversity of the members of an ensemble is known to be an important factor in determining its generalization error. This paper presents a new method for generating ensembles that directly constructs diverse hypotheses using additional artificially-constructed training examples. The technique is a simple, general meta-learner that can use any strong learner as a base classifier to build diverse committees. Experimental results using decision-tree induction as a base learner demonstrate that this approach consistently achieves higher predictive accuracy than both the base classifier and bagging (whereas boosting can occasionally decrease accuracy), and also obtains higher accuracy than boosting early in the learning curve when training data is limited.
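The core idea in the abstract, building a diverse committee by repeatedly adding artificial training examples labeled *against* the current ensemble's predictions, can be sketched in a few lines. The sketch below is illustrative only: it uses a toy 1-D decision stump as the base learner (the paper uses decision-tree induction), a uniform distribution over the feature range for the artificial examples, and hypothetical names (`decorate`, `train_stump`, `committee_size`) that do not come from the paper.

```python
import random

def train_stump(X, y):
    """Train a threshold classifier (decision stump) on 1-D data,
    minimizing training error over candidate thresholds."""
    best = None
    for t in sorted(set(X)):
        for left, right in ((0, 1), (1, 0)):
            err = sum((left if x <= t else right) != yi for x, yi in zip(X, y))
            if best is None or err < best[0]:
                best = (err, t, left, right)
    _, t, left, right = best
    return lambda x: left if x <= t else right

def ensemble_predict(ensemble, x):
    """Majority vote over committee members (ties broken arbitrarily)."""
    votes = [h(x) for h in ensemble]
    return max(set(votes), key=votes.count)

def ensemble_error(ensemble, X, y):
    return sum(ensemble_predict(ensemble, x) != yi for x, yi in zip(X, y)) / len(X)

def decorate(X, y, committee_size=5, n_artificial=10, max_iters=20, seed=0):
    """Diversity-driven committee construction in the spirit of the abstract:
    each candidate member is trained on the real data plus artificial
    examples labeled with the class the current ensemble does NOT predict."""
    rng = random.Random(seed)
    ensemble = [train_stump(X, y)]
    err = ensemble_error(ensemble, X, y)
    lo, hi = min(X), max(X)
    iters = 0
    while len(ensemble) < committee_size and iters < max_iters:
        iters += 1
        # Artificial examples: sampled from the (assumed uniform) feature
        # range, labeled opposite to the ensemble's current prediction.
        art_X = [rng.uniform(lo, hi) for _ in range(n_artificial)]
        art_y = [1 - ensemble_predict(ensemble, x) for x in art_X]
        candidate = train_stump(X + art_X, y + art_y)
        ensemble.append(candidate)
        new_err = ensemble_error(ensemble, X, y)
        if new_err <= err:
            err = new_err        # keep: diversity gained without losing accuracy
        else:
            ensemble.pop()       # discard: member hurt training accuracy
    return ensemble
```

The accept/reject step mirrors the abstract's claim that the method never sacrifices accuracy for diversity: a candidate trained on the artificial data is kept only if the committee's training error does not increase.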
In Proceedings of the Eighteenth International Joint Conference on Artificial Intelligence (IJCAI-2003), pp. 505-510, Acapulco, Mexico, August 2003.

Prem Melville, Ph.D. Alumnus, pmelvi [at] us ibm com
Raymond J. Mooney, Faculty, mooney [at] cs utexas edu