Jointly Improving Parsing and Perception for Natural Language Commands through Human-Robot Dialog (2020)
Humans use natural language to articulate their thoughts and intentions to other people, making it a natural channel for human-robot communication. Natural language understanding in robots needs to be robust to a wide range of both human speakers and environments. In this work, we present methods for parsing natural language to underlying meanings and using robotic sensors to create multi-modal models of perceptual concepts. Through dialog, robots should learn new language constructions and perceptual concepts as they are used in context. We develop an agent for jointly improving parsing and perception in simulation through human-robot dialog, and demonstrate this agent on a robotic platform. Dialog clarification questions are used both to understand commands and to generate additional parsing training data. The agent improves its perceptual concept models through questions about how words relate to objects. We evaluate this agent on Amazon Mechanical Turk. After training on data induced from conversations, the agent asks fewer clarification questions while receiving higher usability ratings. Additionally, we demonstrate the agent on a robotic platform, where it learns new concepts on the fly while completing a real-world task.
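As a rough illustration of the loop the abstract describes (confirming parses through clarification questions to induce parser training pairs, and asking concept questions to collect object labels for perceptual models), the following minimal Python sketch may help. All class and function names in it are illustrative assumptions, not the paper's actual code.

from dataclasses import dataclass, field
from typing import Callable, Dict, List, Tuple

@dataclass
class DialogAgent:
    # parser maps an utterance to a candidate semantic form (a plain string here)
    parser: Callable[[str], str]
    # (utterance, semantic form) pairs confirmed through dialog; usable as parser training data
    induced_parses: List[Tuple[str, str]] = field(default_factory=list)
    # per-concept (object id, label) pairs gathered from "does this word apply?" questions
    concept_labels: Dict[str, List[Tuple[str, bool]]] = field(default_factory=dict)

    def clarify_command(self, utterance: str, confirm: Callable[[str], bool]) -> str:
        """Confirm the parse with the user; confirmed pairs become induced training data."""
        semantic_form = self.parser(utterance)
        if confirm(f"You want me to: {semantic_form}. Is that right?"):
            self.induced_parses.append((utterance, semantic_form))
        return semantic_form

    def ask_concept_question(self, concept: str, object_id: str,
                             answer: Callable[[str], bool]) -> None:
        """Ask whether a word applies to an object, e.g. 'Is this object heavy?'."""
        label = answer(f"Would you use the word '{concept}' for object {object_id}?")
        self.concept_labels.setdefault(concept, []).append((object_id, label))

# Toy usage with a stub parser and scripted user answers.
agent = DialogAgent(parser=lambda u: "bring(mug, office)")
agent.clarify_command("take the mug to the office", confirm=lambda q: True)
agent.ask_concept_question("heavy", "object_3", answer=lambda q: False)
print(agent.induced_parses)   # [('take the mug to the office', 'bring(mug, office)')]
print(agent.concept_labels)   # {'heavy': [('object_3', False)]}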
Citation:
The Journal of Artificial Intelligence Research (JAIR), Vol. 67 (2020), pp. 327-374.
Justin Hart Postdoctoral Fellow hart [at] cs utexas edu
Yuqian Jiang Ph.D. Student
Raymond J. Mooney Faculty mooney [at] cs utexas edu
Aishwarya Padmakumar Ph.D. Alumni aish [at] cs utexas edu
Jivko Sinapov Postdoctoral Alumni jsinapov [at] cs utexas edu
Peter Stone Faculty pstone [at] cs utexas edu
Jesse Thomason Ph.D. Alumni thomason.jesse [at] gmail
Nick Walker Undergraduate Alumni nswalker [at] cs uw edu
Harel Yedidsion Postdoctoral Fellow harel [at] cs utexas edu