Ensemble learning combines multiple learned models under the assumption
that "two (or more) heads are better than one": the decisions of several
hypotheses are combined to produce more accurate results than any single
hypothesis alone.
Boosting and bagging are two popular approaches. Our work focuses on building
diverse committees that are more effective than those built by existing
methods and that are particularly useful for active learning.
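As a concrete illustration of bagging (not the committee-construction method developed in this work), the following is a minimal sketch: each committee member is a decision stump trained on a bootstrap resample of the data, and the committee predicts by majority vote. The stump learner, the toy one-dimensional data set, and all names here are illustrative assumptions.

```python
import random
from collections import Counter

def train_stump(data):
    # data: list of (x, label) pairs, x a float, label in {0, 1}.
    # Exhaustively pick the threshold/orientation minimizing training error.
    xs = sorted(x for x, _ in data)
    thresholds = [(a + b) / 2 for a, b in zip(xs, xs[1:])] or xs
    best = None
    for t in thresholds:
        for right_label in (0, 1):
            errors = sum(1 for x, y in data
                         if (right_label if x > t else 1 - right_label) != y)
            if best is None or errors < best[0]:
                best = (errors, t, right_label)
    _, t, right_label = best
    return lambda x: right_label if x > t else 1 - right_label

def bagging(data, n_models=11, seed=0):
    # Train each stump on a bootstrap sample (drawn with replacement,
    # same size as the original data), then combine by majority vote.
    rng = random.Random(seed)
    models = [train_stump([rng.choice(data) for _ in data])
              for _ in range(n_models)]
    def predict(x):
        votes = Counter(m(x) for m in models)
        return votes.most_common(1)[0][0]
    return predict

# Toy separable data: class 0 below 5, class 1 above.
data = ([(x, 0) for x in (1.0, 2.0, 3.0, 4.0)]
        + [(x, 1) for x in (6.0, 7.0, 8.0, 9.0)])
clf = bagging(data)
print(clf(2.5), clf(7.5))
```

The diversity among committee members comes solely from the randomness of the bootstrap resampling; the committee-construction methods studied in our work aim to produce diversity more directly.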
For a general, popular book on the utility of combining diverse, independent
opinions in human decision-making, see The Wisdom of Crowds.