Using Human-Inspired Signals to Disambiguate Navigational Intentions (2020)
Justin Hart, Reuth Mirsky, Xuesu Xiao, Stone Tejeda, Bonny Mahajan, Jamin Goo, Kathryn Baldauf, Sydney Owen, and Peter Stone
People are proficient at communicating their intentions in order to avoid conflicts when navigating in narrow, crowded environments. Mobile robots, on the other hand, often lack both the ability to interpret human intentions and the ability to clearly communicate their own intentions to people sharing their space. This work addresses the second of these points, leveraging insights about how people implicitly communicate with each other through gaze to enable mobile robots to more clearly signal their navigational intentions. We present a human study measuring the importance of gaze in coordinating people's navigation. Informed by this study, we develop a virtual agent head that is mounted on a mobile robot platform. A comparison between a robot equipped with the virtual agent head and one equipped with an LED turn signal demonstrates that the gaze cue influences people's navigational choices and is more easily interpreted than the LED turn signal.
Citation:
In Proceedings of the 12th International Conference on Social Robotics (ICSR), Golden, Colorado, November 2020.