Opportunistic Active Learning for Grounding Natural Language Descriptions (2017)
Jesse Thomason, Aishwarya Padmakumar, Jivko Sinapov, Justin Hart, Peter Stone, and Raymond J. Mooney
Active learning identifies data points from a pool of unlabeled examples whose labels, if made available, are most likely to improve the predictions of a supervised model. Most research on active learning assumes that an agent has access to the entire pool of unlabeled data and can ask for labels of any data points during an initial training phase. However, when incorporated into a larger task, an agent may only be able to query some subset of the unlabeled pool. An agent can also opportunistically query for labels that may be useful in the future, even if they are not immediately relevant. In this paper, we demonstrate that this type of opportunistic active learning can improve performance in grounding natural language descriptions of everyday objects---an important skill for home and office robots. We find, with a real robot in an object identification setting, that inquisitive behavior---asking users important questions about the meanings of words that may be off-topic for the current dialog---leads to identifying the correct object more often over time.
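The pool-based active learning setup described above can be illustrated with a minimal uncertainty-sampling loop. This is a hedged sketch in plain NumPy: the toy two-cluster data, the least-squares linear classifier, and the least-confident query heuristic are stand-ins chosen for illustration, not the grounding models or query strategies used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy pool: two Gaussian clusters in 2-D (a hypothetical stand-in for
# object-description features).
X_pool = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y_pool = np.array([0] * 50 + [1] * 50)  # oracle labels, hidden until queried


def train(X, y):
    """Fit a least-squares linear classifier on the labeled subset."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append a bias column
    w, *_ = np.linalg.lstsq(Xb, 2 * y - 1, rcond=None)  # targets in {-1, +1}
    return w


def scores(w, X):
    """Signed distance-like score; magnitude ~ model confidence."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return Xb @ w


# Seed with one label per class, then repeatedly query the pool point
# the current model is least confident about (score closest to zero).
labeled = [0, 50]
for _ in range(10):
    w = train(X_pool[labeled], y_pool[labeled])
    unlabeled = [i for i in range(len(X_pool)) if i not in labeled]
    s = np.abs(scores(w, X_pool[unlabeled]))
    labeled.append(unlabeled[int(np.argmin(s))])  # least-confident query

w = train(X_pool[labeled], y_pool[labeled])
acc = np.mean((scores(w, X_pool) > 0) == (y_pool == 1))
```

In the restricted setting the paper studies, the agent could not scan the whole `unlabeled` list at each step; an opportunistic agent instead queries whichever currently accessible point is most informative, even when that label only pays off in a later dialog.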
Citation:
In Proceedings of the 1st Annual Conference on Robot Learning (CoRL-17), Sergey Levine, Vincent Vanhoucke, and Ken Goldberg (Eds.), pp. 67--76, Mountain View, California, November 2017. PMLR.