Department of Computer Science

Machine Learning Research Group

University of Texas at Austin Artificial Intelligence Lab

Publications: Ensemble Learning

Ensemble learning combines multiple learned models under the assumption that "two (or more) heads are better than one": the decisions of several hypotheses are combined, typically by voting, to produce more accurate results than any single model alone. Boosting and bagging are two popular approaches. Our work focuses on building diverse committees that are more effective than those built by existing methods and, in particular, are useful for active learning.
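As a concrete illustration of the basic idea, the sketch below builds a bagged committee of decision trees and combines their decisions by majority vote; the final lines show how committee disagreement can be used as a query signal for committee-based active learning. This is a generic illustration rather than code from the publications listed below; the synthetic dataset, the committee size (`n_members`), and the scikit-learn models are all assumptions made for the example.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

n_members = 15  # hypothetical committee size, chosen for the example
committee = []
for _ in range(n_members):
    # Bootstrap resampling (bagging) gives each member a different view
    # of the training data, which is one simple source of diversity.
    idx = rng.integers(0, len(X), size=len(X))
    committee.append(DecisionTreeClassifier(max_depth=3).fit(X[idx], y[idx]))

# Combine the hypotheses' decisions by unweighted majority vote.
votes = np.stack([m.predict(X) for m in committee])  # (n_members, n_samples)
majority = (votes.mean(axis=0) > 0.5).astype(int)    # binary labels in {0, 1}
print("ensemble accuracy:", (majority == y).mean())

# Committee disagreement (here, per-example vote variance) is the signal
# that committee-based active learning uses to pick examples to label.
disagreement = votes.var(axis=0)
print("most informative example:", disagreement.argmax())
```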

For a general, popular book on the utility of combining diverse, independent opinions in human decision-making, see The Wisdom of Crowds.

  1. Explainable Improved Ensembling for Natural Language and Vision
    [Details] [PDF] [Slides (PPT)] [Slides (PDF)]
    Nazneen Rajani
    PhD Thesis, Department of Computer Science, The University of Texas at Austin, July 2018.
  2. Stacking With Auxiliary Features for Visual Question Answering
    [Details] [PDF] [Poster]
    Nazneen Fatema Rajani, Raymond J. Mooney
In Proceedings of the 16th Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT 2018), 2217-2226, 2018.
  3. Ensembling Visual Explanations for VQA
    [Details] [PDF] [Poster]
    Nazneen Fatema Rajani, Raymond J. Mooney
In Proceedings of the NIPS 2017 Workshop on Visually-Grounded Interaction and Language (ViGIL), December 2017.
  4. Using Explanations to Improve Ensembling of Visual Question Answering Systems
    [Details] [PDF] [Poster]
    Nazneen Fatema Rajani and Raymond J. Mooney
    In Proceedings of the IJCAI 2017 Workshop on Explainable Artificial Intelligence (XAI), 43-47, Melbourne, Australia, August 2017.
  5. Stacking With Auxiliary Features
    [Details] [PDF] [Slides (PDF)] [Poster]
    Nazneen Fatema Rajani and Raymond J. Mooney
    In Proceedings of the 26th International Joint Conference on Artificial Intelligence (IJCAI-17), 2634-2640, Melbourne, Australia, 2017.
  6. Stacking With Auxiliary Features for Combining Supervised and Unsupervised Ensembles
    [Details] [PDF]
    Nazneen Fatema Rajani and Raymond J. Mooney
    In Proceedings of the Ninth Text Analysis Conference (TAC 2016), 2016.
  7. Stacking With Auxiliary Features
    [Details] [PDF]
    Nazneen Fatema Rajani and Raymond J. Mooney
arXiv preprint arXiv:1605.08764, 2016.
  8. Combining Supervised and Unsupervised Ensembles for Knowledge Base Population
    [Details] [PDF]
    Nazneen Fatema Rajani and Raymond J. Mooney
In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP-16), 2016.
  9. Stacked Ensembles of Information Extractors for Knowledge-Base Population by Combining Supervised and Unsupervised Approaches
    [Details] [PDF] [Slides (PDF)]
Nazneen Fatema Rajani and Raymond J. Mooney
    In Proceedings of the Eighth Text Analysis Conference (TAC 2015), November 2015.
  10. Knowledge Transfer Using Latent Variable Models
    [Details] [PDF] [Slides (PDF)]
    Ayan Acharya
    PhD Thesis, Department of Electrical and Computer Engineering, The University of Texas at Austin, August 2015.
  11. Stacked Ensembles of Information Extractors for Knowledge-Base Population
    [Details] [PDF] [Slides (PPT)]
Vidhoon Viswanathan, Nazneen Fatema Rajani, Yinon Bentor, and Raymond J. Mooney
    In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics (ACL-15), 177-187, Beijing, China, July 2015.
  12. Knowledge Base Population using Stacked Ensembles of Information Extractors
    [Details] [PDF]
    Vidhoon Viswanathan
    Masters Thesis, Department of Computer Science, The University of Texas at Austin, May 2015.
  13. Creating Diverse Ensemble Classifiers to Reduce Supervision
    [Details] [PDF]
    Prem Melville
    PhD Thesis, Department of Computer Sciences, University of Texas at Austin, November 2005. 141 pages. Technical Report TR-05-49.
  14. Combining Bias and Variance Reduction Techniques for Regression
    [Details] [PDF]
Yuk Lai Suen, Prem Melville, and Raymond J. Mooney
Technical Report UT-AI-TR-05-321, University of Texas at Austin, July 2005. Available at www.cs.utexas.edu/~ml/publication.
  15. Combining Bias and Variance Reduction Techniques for Regression
    [Details] [PDF]
Yuk Lai Suen, Prem Melville, and Raymond J. Mooney
In Proceedings of the 16th European Conference on Machine Learning (ECML-05), 741-749, Porto, Portugal, October 2005.
  16. Diverse Ensembles for Active Learning
    [Details] [PDF]
    Prem Melville and Raymond J. Mooney
In Proceedings of the 21st International Conference on Machine Learning (ICML-2004), 584-591, Banff, Canada, July 2004.
  17. Experiments on Ensembles with Missing and Noisy Data
    [Details] [PDF]
    Prem Melville, Nishit Shah, Lilyana Mihalkova, and Raymond J. Mooney
In F. Roli, J. Kittler, and T. Windeatt, editors, Lecture Notes in Computer Science: Proceedings of the Fifth International Workshop on Multiple Classifier Systems (MCS-2004), 293-302, Cagliari, Italy, June 2004. Springer-Verlag.
  18. Creating Diversity in Ensembles Using Artificial Data
    [Details] [PDF]
    Prem Melville and Raymond J. Mooney
Information Fusion: Special Issue on Diversity in Multiple Classifier Systems, 6(1):99-111, 2004.
  19. Creating Diverse Ensemble Classifiers
    [Details] [PDF]
    Prem Melville
    Technical Report UT-AI-TR-03-306, Department of Computer Sciences, University of Texas at Austin, December 2003. Ph.D. proposal.
  20. Constructing Diverse Classifier Ensembles Using Artificial Training Examples
    [Details] [PDF]
    Prem Melville and Raymond J. Mooney
    In Proceedings of the Eighteenth International Joint Conference on Artificial Intelligence (IJCAI-2003), 505-510, Acapulco, Mexico, August 2003.