Active Learning for Probability Estimation using Jensen-Shannon Divergence (2005)
Prem Melville, Stewart M. Yang, Maytal Saar-Tsechansky, and Raymond J. Mooney
Active selection of good training examples is an important approach to reducing data-collection costs in machine learning; however, most existing methods focus on maximizing classification accuracy. In many applications, such as those with unequal misclassification costs, producing good class probability estimates (CPEs) is more important than optimizing classification accuracy. We introduce novel variations of two existing active-learning algorithms, Bootstrap-LV and ACTIVEDECORATE, by using Jensen-Shannon divergence (a similarity measure for probability distributions) to improve sample selection for optimizing CPEs. Comprehensive experimental results demonstrate the benefits of our enhancements.
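For reference, the generalized Jensen-Shannon divergence of distributions P_1, ..., P_m with weights w_i is JS(P_1, ..., P_m) = H(sum_i w_i P_i) - sum_i w_i H(P_i), where H is Shannon entropy; it is zero exactly when all distributions agree, making it a natural disagreement measure over an ensemble's CPEs. The sketch below is illustrative only (the function name and uniform-weight default are not taken from the paper):

import numpy as np

def jensen_shannon(dists, weights=None):
    """Generalized Jensen-Shannon divergence of discrete distributions.

    dists: array of shape (m, k); each row is a probability distribution.
    weights: optional mixing weights over the m distributions (default: uniform).
    """
    dists = np.asarray(dists, dtype=float)
    m = dists.shape[0]
    w = np.full(m, 1.0 / m) if weights is None else np.asarray(weights, dtype=float)

    def entropy(p):
        p = p[p > 0]  # convention: 0 * log 0 = 0
        return -np.sum(p * np.log2(p))

    mixture = w @ dists  # weighted average distribution, shape (k,)
    # H(mixture) - sum_i w_i H(P_i); zero iff all rows of dists are identical
    return entropy(mixture) - sum(wi * entropy(p) for wi, p in zip(w, dists))

In an active-learning loop of the kind the abstract describes, one plausible use is to score each unlabeled example by the divergence among the ensemble members' predicted class distributions and query labels for the highest-scoring examples.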
Citation:
In Proceedings of the 16th European Conference on Machine Learning, pp. 268--279, Porto, Portugal, October 2005.
Prem Melville (Ph.D. Alumni): pmelvi [at] us ibm com
Raymond J. Mooney (Faculty): mooney [at] cs utexas edu
Meng (Stewart) Yang (Masters Alumni): windtown [at] cs utexas edu