Opportunistic Active Learning for Grounding Natural Language Descriptions.
Jesse Thomason, Aishwarya Padmakumar, Jivko Sinapov, Justin Hart, Peter Stone, and Raymond J. Mooney.
In Proceedings of the 1st Annual Conference on Robot Learning (CoRL-17), pp. 67–76, PMLR, Mountain View, California, November 2017.
Active learning identifies data points from a pool of unlabeled examples whose labels, if made available, are most likely to improve the predictions of a supervised model. Most research on active learning assumes that an agent has access to the entire pool of unlabeled data and can ask for labels of any data points during an initial training phase. However, when incorporated in a larger task, an agent may only be able to query some subset of the unlabeled pool. An agent can also opportunistically query for labels that may be useful in the future, even if they are not immediately relevant. In this paper, we demonstrate that this type of opportunistic active learning can improve performance in grounding natural language descriptions of everyday objects---an important skill for home and office robots. We find, with a real robot in an object identification setting, that inquisitive behavior---asking users important questions about the meanings of words that may be off-topic for the current dialog---leads to identifying the correct object more often over time.
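The pool-based query selection described above can be illustrated with a minimal sketch. This is not the paper's grounding system; the toy predictor and the least-confident scoring rule are illustrative assumptions, showing only the core idea of picking the unlabeled example the model is least sure about.

```python
# Minimal sketch of pool-based active learning via uncertainty sampling.
# The predictor below is a toy stand-in, not the paper's grounding model.

def uncertainty(prob):
    # Least-confident score: 1 minus the highest class probability.
    return 1.0 - max(prob)

def select_query(pool, predict_proba):
    # Choose the unlabeled example whose prediction is least confident,
    # i.e., the one closest to the decision boundary.
    return max(pool, key=lambda x: uncertainty(predict_proba(x)))

def predict_proba(x):
    # Toy binary classifier: probability of class 1 grows with the feature.
    p1 = min(max(x, 0.0), 1.0)
    return [1.0 - p1, p1]

pool = [0.05, 0.48, 0.93, 0.51]
query = select_query(pool, predict_proba)
print(query)  # 0.51 -- the example nearest the 0.5 decision boundary
```

In the opportunistic setting the paper studies, the agent would apply a selection rule like this not only to examples relevant to the current dialog, but to any object or word it may need later.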
@inproceedings{CORL17-thomason,
title={Opportunistic Active Learning for Grounding Natural Language Descriptions},
author={Jesse Thomason and Aishwarya Padmakumar and Jivko Sinapov and Justin Hart and Peter Stone and Raymond J. Mooney},
booktitle={Proceedings of the 1st Annual Conference on Robot Learning (CoRL-17)},
month={November},
editor={Sergey Levine and Vincent Vanhoucke and Ken Goldberg},
address={Mountain View, California},
publisher={PMLR},
pages={67--76},
url={http://www.cs.utexas.edu/users/ai-lab/pub-view.php?PubID=127657},
year={2017},
abstract={
      Active learning identifies data points from a pool of unlabeled
      examples whose labels, if made available, are most likely to improve the
      predictions of a supervised model. Most research on active learning assumes
      that an agent has access to the entire pool of unlabeled data and can ask
      for labels of any data points during an initial training phase. However,
      when incorporated in a larger task, an agent may only be able to query
      some subset of the unlabeled pool. An agent can also opportunistically
      query for labels that may be useful in the future, even if they are not
      immediately relevant. In this paper, we demonstrate that this type of
      opportunistic active learning can improve performance in grounding
      natural language descriptions of everyday objects---an important skill
      for home and office robots. We find, with a real robot in an object
      identification setting, that inquisitive behavior---asking users
      important questions about the meanings of words that may be
      off-topic for the current dialog---leads to identifying the correct
      object more often over time.},
location={Mountain View, California}
}
Generated by bib2html.pl (written by Patrick Riley) on Thu Oct 23, 2025 16:14:17