Improving Grounded Natural Language Understanding through Human-Robot Dialog (2019)
Natural language understanding for robotics can require substantial domain- and platform-specific engineering. For example, for mobile robots to pick and place objects in an environment to satisfy human commands, we can specify the language humans use to issue such commands and connect concept words like "red can" to physical object properties. One way to alleviate this engineering for a new domain is to enable robots in human environments to adapt dynamically, continually learning new language constructions and perceptual concepts. In this work, we present an end-to-end pipeline for translating natural language commands to discrete robot actions, and we use clarification dialogs to jointly improve language parsing and concept grounding. We train and evaluate this agent in a virtual setting on Amazon Mechanical Turk, and we transfer the learned agent to a physical robot platform to demonstrate it in the real world.
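To illustrate the general idea of using clarification dialogs to improve parsing over time, here is a minimal sketch, not the paper's actual system: a toy agent that maps commands to actions via a lexicon, asks the user to rephrase when its parse confidence is low, and then learns the novel phrasing. All class, function, and phrase names are hypothetical.

```python
# Minimal sketch (assumed, not the paper's implementation): a dialog
# agent that asks a clarification question on low-confidence parses
# and adds the clarified phrasing to its lexicon, so repeated use of
# a new construction no longer requires clarification.

class ClarifyingAgent:
    def __init__(self):
        # seed lexicon: phrase -> (action, argument)
        self.lexicon = {
            "bring the red can": ("deliver", "red_can"),
        }

    def parse(self, command):
        """Return (parse, confidence). Confidence is 1.0 on an exact
        lexicon hit and 0.0 otherwise; a real parser would be graded."""
        if command in self.lexicon:
            return self.lexicon[command], 1.0
        return None, 0.0

    def handle(self, command, clarify):
        """Parse a command; on low confidence, ask the user (via the
        `clarify` callback) to rephrase, then learn the new phrasing."""
        parse, conf = self.parse(command)
        if conf < 0.5:
            rephrased = clarify("What should I do? Please rephrase.")
            parse, conf = self.parse(rephrased)
            if conf >= 0.5:
                # jointly improve parsing: remember the novel phrasing
                self.lexicon[command] = parse
        return parse

agent = ClarifyingAgent()
# A novel phrasing triggers one clarification turn...
action = agent.handle("fetch the red can", lambda q: "bring the red can")
# ...after which the same phrasing parses directly, with no dialog.
direct, conf = agent.parse("fetch the red can")
```

The same loop structure applies to perceptual concepts: a failed grounding of a concept word can trigger a question whose answer refines the concept model, which is the joint-improvement idea the abstract describes.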
Citation:
In IEEE International Conference on Robotics and Automation (ICRA), Montreal, Canada, May 2019.
Justin Hart Postdoctoral Fellow hart [at] cs utexas edu
Yuqian Jiang Ph.D. Student
Raymond J. Mooney Faculty mooney [at] cs utexas edu
Aishwarya Padmakumar Ph.D. Student aish [at] cs utexas edu
Jivko Sinapov Postdoctoral Alumni jsinapov [at] cs utexas edu
Peter Stone Faculty pstone [at] cs utexas edu
Jesse Thomason Ph.D. Alumni thomason DOT jesse AT gmail
Nick Walker Undergraduate Alumni nswalker [at] cs uw edu
Harel Yedidsion Postdoctoral Fellow harel [at] cs utexas edu