• Reasoning and Planning for Mobile Robots

    To be deemed intelligent in environments where humans and robots coexist, robots need the capability to represent and reason with commonsense knowledge. Such knowledge is normally, but not always, true: for instance, people prefer coffee in the morning, and office doors are closed during holidays. We use answer set programming (ASP) and its extensions for commonsense reasoning, and partially observable Markov decision processes (POMDPs) for probabilistic planning. The goal of this research is to develop algorithms that enable robots to plan under uncertainty while simultaneously reasoning with commonsense knowledge; a minimal illustrative sketch of this idea follows the publication reference below.

    Representative Publication: CORPP: Commonsense Reasoning and Probabilistic Planning, as Applied to Dialog with a Mobile Robot (AAAI 2015)
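
    The sketch below is a minimal, self-contained illustration of the general idea: commonsense defaults first shape a prior over possible worlds, and that prior initializes a POMDP-style Bayesian belief update. The domain (drink preferences), the rules, and all numbers are hypothetical and are not taken from the paper.

      # Illustrative sketch: commonsense defaults shape the prior belief,
      # which a POMDP-style Bayesian update then refines with noisy observations.
      # The domain (drink preferences) and all numbers are hypothetical.

      CANDIDATE_WORLDS = ["coffee", "tea", "water"]

      def commonsense_prior(time_of_day):
          """Default rule: people prefer coffee in the morning (normally true, not always)."""
          if time_of_day == "morning":
              return {"coffee": 0.7, "tea": 0.2, "water": 0.1}
          return {"coffee": 0.3, "tea": 0.4, "water": 0.3}

      def belief_update(belief, observation, obs_model):
          """Standard Bayesian belief update: b'(s) is proportional to P(o | s) * b(s)."""
          unnormalized = {s: obs_model[s].get(observation, 0.0) * belief[s] for s in belief}
          total = sum(unnormalized.values())
          return {s: p / total for s, p in unnormalized.items()}

      # Hypothetical observation model: what the speech recognizer is likely to hear
      # given the drink the person actually asked for.
      OBS_MODEL = {
          "coffee": {"heard_coffee": 0.8, "heard_tea": 0.1, "heard_water": 0.1},
          "tea":    {"heard_coffee": 0.1, "heard_tea": 0.8, "heard_water": 0.1},
          "water":  {"heard_coffee": 0.1, "heard_tea": 0.1, "heard_water": 0.8},
      }

      if __name__ == "__main__":
          # Commonsense reasoning supplies an informative prior ...
          belief = commonsense_prior("morning")
          # ... and probabilistic planning refines it with noisy evidence.
          belief = belief_update(belief, "heard_tea", OBS_MODEL)
          print({s: round(p, 3) for s, p in belief.items()})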

  • Human-Robot Interaction

    Robots are becoming increasingly sophisticated and are bound to become pervasive in humans’ everyday lives. To collaborate effectively with humans, a robot should be able to understand their activities and intentions automatically. This understanding is especially important in human-robot interaction scenarios: if the robot can properly interpret human behavior, its communication with people is facilitated and its ability to interact with them improves. The aim of this research is therefore to develop new methods that allow a robot to recognize human activities and intentions and to react to them appropriately; a rough sketch of one common recognition pipeline appears after the publication reference below.

    Representative Publication: Robot-centric activity recognition ‘in the wild’ (ICSR 2015)
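
    As a rough illustration only (the window size, features, and activity labels below are hypothetical and unrelated to the paper's method), a common starting point for activity recognition is to slide a window over a sensor stream, summarize each window with simple statistics, and classify it against per-activity prototypes.

      # Illustrative sliding-window activity recognizer over a 1-D sensor stream.
      # Window size, features, prototypes, and activity labels are hypothetical.
      from statistics import mean, stdev

      def window_features(window):
          """Summarize a window of sensor readings with simple statistics."""
          return (mean(window), stdev(window))

      def nearest_prototype(features, prototypes):
          """Assign the activity whose prototype features are closest (Euclidean)."""
          def dist(a, b):
              return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
          return min(prototypes, key=lambda label: dist(features, prototypes[label]))

      # Hypothetical prototypes learned offline from labeled data: (mean, stdev).
      PROTOTYPES = {"standing": (0.0, 0.05), "walking": (0.1, 0.6), "waving": (0.0, 1.2)}

      def recognize(stream, window_size=10, step=5):
          labels = []
          for start in range(0, len(stream) - window_size + 1, step):
              feats = window_features(stream[start:start + window_size])
              labels.append(nearest_prototype(feats, PROTOTYPES))
          return labels

      if __name__ == "__main__":
          import random
          random.seed(0)
          stream = ([random.gauss(0.0, 0.05) for _ in range(20)] +
                    [random.gauss(0.1, 0.6) for _ in range(20)])
          print(recognize(stream))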

  • Multi-Robot Coordination and Guidance

    In this research, we demonstrate how individual service robots in a multi-robot system can be temporarily reassigned from their original tasks to help guide a human from one location to another in the environment. We formulate this multi-robot treatment of the human guidance problem as a Markov decision process (MDP) and explore how different MDP planners can be used to solve it. Our long-term goal is to expand this MDP into a general framework for efficiently interrupting robots performing background service tasks to aid humans as needed; a toy MDP solver illustrating the formulation follows the publication reference below.

    Representative Publication: Leading the Way: An Efficient Multi-robot Guidance System (AAMAS 2015)
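
    To make the MDP formulation concrete, the following is a toy value-iteration solver for a generic finite MDP; the states, actions, transition probabilities, and rewards are invented for illustration and are far simpler than the multi-robot guidance MDP in the paper.

      # Toy value iteration for a finite MDP: states, actions, transition
      # probabilities, and rewards are hypothetical, not the paper's formulation.

      # transitions[s][a] = list of (probability, next_state, reward)
      TRANSITIONS = {
          "robot_on_task": {
              "continue": [(1.0, "robot_on_task", 1.0)],
              "guide_human": [(0.9, "human_guided", 5.0), (0.1, "robot_on_task", -1.0)],
          },
          "human_guided": {
              "resume_task": [(1.0, "robot_on_task", 2.0)],
          },
      }

      def value_iteration(transitions, gamma=0.95, tol=1e-6):
          """Iterate the Bellman optimality backup until the values converge."""
          values = {s: 0.0 for s in transitions}
          while True:
              delta = 0.0
              for s, actions in transitions.items():
                  best = max(
                      sum(p * (r + gamma * values[s2]) for p, s2, r in outcomes)
                      for outcomes in actions.values()
                  )
                  delta = max(delta, abs(best - values[s]))
                  values[s] = best
              if delta < tol:
                  return values

      def greedy_policy(transitions, values, gamma=0.95):
          """Extract the action that maximizes expected return in each state."""
          return {
              s: max(actions, key=lambda a: sum(p * (r + gamma * values[s2])
                                                for p, s2, r in actions[a]))
              for s, actions in transitions.items()
          }

      if __name__ == "__main__":
          V = value_iteration(TRANSITIONS)
          print(greedy_policy(TRANSITIONS, V))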

  • Grounded Language Learning

    Humans use natural language to articulate their thoughts and intentions to other people, making it a natural channel for human-robot communication. We demonstrate methods for parsing natural language into underlying meanings and for using robotic sensors to build multi-modal models of perceptual concepts (e.g., “Bring the heavy mug to Bob's office.”). Our robots use dialog with humans to improve both parsing and perception. Our long-term goal is to enable lifelong learning of new language constructions and perceptual concept models while accomplishing tasks. We have implemented a language learning agent on a Segbot V2 robot equipped with a Kinova arm, incorporating speech, perception, navigation, and manipulation in a single autonomous robot. A minimal sketch of the perceptual-grounding idea follows the publication list below.

    A demo of this system’s abilities can be viewed here.

    Representative Publications:

    Guiding Exploratory Behaviors for Multi-Modal Grounding of Linguistic Descriptions (AAAI 2018)

    Guiding Interaction Behaviors for Multi-modal Grounded Language Learning (ACL 2017)

    Learning Multi-Modal Grounded Linguistic Semantics by Playing I, Spy (IJCAI 2016)

    Learning to Interpret Natural Language Commands through Human-Robot Dialog (IJCAI 2015)
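
    The sketch below illustrates the grounding idea in miniature: each word is associated with a simple perceptual model over multi-modal feature vectors, and a description is grounded by scoring candidate objects against the words that describe them. The features, vocabulary, and scoring scheme are hypothetical and do not reproduce the systems in the publications above.

      # Illustrative multi-modal grounding: each word gets a simple prototype
      # "classifier" over perceptual features; a description is grounded by
      # picking the candidate object that best matches all of its words.
      # All feature values and vocabulary here are hypothetical.

      # Hypothetical perceptual features: (weight_kg, redness, height_cm),
      # e.g. gathered from a scale, a camera, and a depth sensor.
      WORD_PROTOTYPES = {
          "heavy": (1.5, 0.5, 10.0),
          "light": (0.2, 0.5, 10.0),
          "red":   (0.8, 0.9, 10.0),
          "tall":  (0.8, 0.5, 25.0),
      }

      def word_score(word, features):
          """Higher when the object's features are close to the word's prototype."""
          proto = WORD_PROTOTYPES[word]
          dist = sum((a - b) ** 2 for a, b in zip(proto, features)) ** 0.5
          return 1.0 / (1.0 + dist)

      def ground(description_words, candidate_objects):
          """Return the object that best fits every known word in the description."""
          def total(obj_features):
              return sum(word_score(w, obj_features) for w in description_words
                         if w in WORD_PROTOTYPES)
          return max(candidate_objects, key=lambda name: total(candidate_objects[name]))

      if __name__ == "__main__":
          objects = {
              "mug_a": (1.4, 0.2, 9.0),   # heavy, not red
              "mug_b": (0.3, 0.9, 9.0),   # light, red
          }
          # "Bring the heavy mug ..." -> the grounded referent should be mug_a.
          print(ground(["heavy"], objects))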