Learning Multi-Modal Grounded Linguistic Semantics by Playing "I Spy" (2016)
Grounded language learning bridges words like 'red' and 'square' with robot perception. The vast majority of existing work in this space limits robot perception to vision. In this paper, we build perceptual models that use haptic, auditory, and proprioceptive data acquired through robot exploratory behaviors to go beyond vision. Our system learns to ground natural language words describing objects using supervision from an interactive human-robot "I Spy" game. In this game, the human and robot take turns describing one object among several, then trying to guess which object the other has described. All supervision labels were gathered from human participants physically present to play this game with a robot. We demonstrate that our multi-modal system for grounding natural language outperforms a traditional, vision-only grounding framework by comparing the two on the "I Spy" task. We also provide a qualitative analysis of the groundings learned in the game, visualizing what words are understood better with multi-modal sensory information as well as identifying learned word meanings that correlate with physical object properties (e.g. 'small' negatively correlates with object weight).
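The abstract describes deciding which of several objects a word such as 'red' or 'heavy' picks out by combining evidence from visual, haptic, auditory, and proprioceptive sensing. The sketch below is only a hypothetical illustration of that idea, not the paper's implementation: it assumes one fitted binary classifier per word and sensory modality (names like MODALITIES, word_classifiers, and ground_word_score are invented here) and averages their decision scores to guess which candidate object a description refers to.

    # Hypothetical sketch: multi-modal word grounding for an "I Spy"-style guess.
    # All names below are illustrative assumptions, not the paper's code.
    from typing import Dict, List

    # Assumed sensory contexts beyond vision, as described in the abstract.
    MODALITIES = ["vision", "haptic", "auditory", "proprioceptive"]

    def ground_word_score(word: str,
                          object_features: Dict[str, List[float]],
                          word_classifiers: Dict[str, Dict[str, object]]) -> float:
        """Average per-modality decision scores for how well `word` fits one object.

        word_classifiers[word][modality] is assumed to be a fitted binary
        classifier (e.g. an sklearn SVM) exposing decision_function.
        """
        scores = []
        for modality in MODALITIES:
            clf = word_classifiers.get(word, {}).get(modality)
            feats = object_features.get(modality)
            if clf is None or feats is None:
                continue  # skip modalities with no trained model or no sensor data
            scores.append(float(clf.decision_function([feats])[0]))
        return sum(scores) / len(scores) if scores else 0.0

    def guess_object(description: List[str],
                     candidates: List[Dict[str, List[float]]],
                     word_classifiers: Dict[str, Dict[str, object]]) -> int:
        """Return the index of the candidate object that best matches the description."""
        totals = [
            sum(ground_word_score(w, obj, word_classifiers) for w in description)
            for obj in candidates
        ]
        return max(range(len(candidates)), key=lambda i: totals[i])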
Citation:
In Proceedings of the 25th International Joint Conference on Artificial Intelligence (IJCAI-16), pp. 3477-3483, New York City, 2016.

Raymond J. Mooney Faculty mooney [at] cs utexas edu
Jivko Sinapov Postdoctoral Alumni jsinapov [at] cs utexas edu
Peter Stone Faculty pstone [at] cs utexas edu
Jesse Thomason Ph.D. Alumni thomason DOT jesse AT gmail