On NDCG Consistency of Listwise Ranking Methods (2011)
Pradeep Ravikumar, Ambuj Tewari, and Eunho Yang
We study the consistency of listwise ranking methods with respect to the popular Normalized Discounted Cumulative Gain (NDCG) criterion. State-of-the-art listwise approaches replace NDCG with a surrogate loss that is easier to optimize. We characterize NDCG consistency of surrogate losses and discover a surprising fact: several commonly used surrogates are NDCG inconsistent. We then show how to modify them so that they become NDCG consistent. Next, we state a stronger but more natural notion of strong NDCG consistency and, surprisingly, are able to provide an explicit characterization of all strongly NDCG consistent surrogates. Going beyond qualitative consistency considerations, we also give quantitative statements that let us translate excess error, as measured in the surrogate, into excess error relative to the Bayes-optimal ranking function for NDCG. Finally, we derive improved results under a certain natural "low noise" or "large margin" condition. Our experiments demonstrate that ensuring NDCG consistency does improve the performance of listwise ranking methods on real-world datasets. Moreover, a novel surrogate function suggested by our theoretical results leads to further improvements over even the NDCG consistent versions of existing surrogates.
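For readers unfamiliar with the criterion the paper analyzes, the sketch below computes NDCG in its standard exponential-gain, logarithmic-discount form (the precise gain and discount functions are a design choice; this is the common variant, not necessarily the exact one used in the paper's experiments):

```python
import math

def dcg(relevances):
    """Discounted Cumulative Gain of a ranked list of relevance labels:
    gain (2^rel - 1) at rank i is discounted by 1 / log2(i + 2)."""
    return sum((2**rel - 1) / math.log2(i + 2) for i, rel in enumerate(relevances))

def ndcg(ranked_relevances):
    """NDCG: DCG of the given ranking divided by the DCG of the ideal
    (relevance-sorted) ranking, so a perfect ranking scores 1.0."""
    ideal = dcg(sorted(ranked_relevances, reverse=True))
    return dcg(ranked_relevances) / ideal if ideal > 0 else 0.0

# A ranking that places the most relevant document first scores higher
# than one that buries it at the bottom.
good = ndcg([3, 2, 0, 1])  # near-ideal ordering
bad = ndcg([0, 1, 2, 3])   # reversed ordering
```

Because NDCG is a discontinuous function of the model's scores, listwise methods optimize a smooth surrogate instead; the paper asks when minimizing such a surrogate also recovers the NDCG-optimal ranking.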
View:
PDF
Citation:
International Conference on AI and Statistics (AISTATS), 2011.
Bibtex:
@inproceedings{RTY11,
  title={On NDCG Consistency of Listwise Ranking Methods},
  author={Pradeep Ravikumar and Ambuj Tewari and Eunho Yang},
  booktitle={International Conference on AI and Statistics (AISTATS)},
  series={14},
  url={http://www.cs.utexas.edu/users/ai-lab?RTY11},
  year={2011}
}
People
Pradeep Ravikumar
Formerly affiliated Faculty
pradeepr [at] cs utexas edu
Eunho Yang
Ph.D. Alumni
eunho [at] cs utexas edu
Areas of Interest
Machine Learning
Ranking
Statistical Learning