Interactive Multi-Sensory Object Perception for Embodied Agents

AAAI Spring Symposium Series

March 27-29, 2017, Stanford University

Overview

For a robot to perceive object properties with multiple sensory modalities, it needs to interact with objects through action. This requires that the agent be embodied, i.e., that the robot interact with its environment through a physical body situated within that environment. A major challenge is getting a robot to interact with a scene quickly and efficiently. More broadly, learning to perceive and reason about objects in terms of multiple sensory modalities remains a longstanding challenge in robotics. Multiple lines of evidence from psychology and cognitive science demonstrate that humans rely on multiple senses (e.g., audition, haptics, and touch) in a broad variety of contexts, from language learning to acquiring manipulation skills. Nevertheless, most object representations used by robots today rely solely on visual input (e.g., a 3D object model) and thus cannot be used to learn or reason about non-visual object properties such as weight and texture.

The major question we want to address is: how do we collect large datasets from robots exploring the world with multi-sensory inputs, and what algorithms can we use to learn and act with this data? For instance, at several major universities, robots can already operate autonomously for long periods of time (e.g., navigating throughout a building, manipulating objects, etc.). Such robots could potentially generate large amounts of multi-modal sensory data coupled with the robot's actions. While the community has focused on how to deal with visual information (e.g., deep learning of visual features from large-scale databases), there have been far fewer explorations of how to utilize and learn from data collected at very different scales by very different sensors. Specific challenges include the fact that different sensors produce data at different sampling rates and resolutions. Furthermore, data produced by a robot acting in the world is typically not independently and identically distributed (a common assumption of machine learning algorithms), as each data point often depends on the robot's previous actions.
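
As a toy illustration of the sampling-rate mismatch mentioned above (not part of any symposium material), the short Python sketch below resamples hypothetical audio, haptic, and visual-feature streams onto a shared time base. All names, rates, and signals are invented for illustration; real systems would typically use per-modality encoders rather than simple interpolation.

    import numpy as np

    # Hypothetical recording in which each modality arrives at a different rate.
    # All names, rates, and signals below are invented purely for illustration.
    duration_s = 2.0
    audio_hz, haptic_hz, vision_hz = 16000, 1000, 30

    audio_t = np.arange(0.0, duration_s, 1.0 / audio_hz)
    haptic_t = np.arange(0.0, duration_s, 1.0 / haptic_hz)
    vision_t = np.arange(0.0, duration_s, 1.0 / vision_hz)

    audio = np.random.randn(audio_t.size)    # e.g., microphone samples
    haptic = np.random.randn(haptic_t.size)  # e.g., fingertip force readings
    vision = np.random.randn(vision_t.size)  # e.g., one visual feature per frame

    # Linearly interpolate every stream onto a common 100 Hz timeline so that
    # each row of the resulting matrix corresponds to a single moment in time.
    common_t = np.arange(0.0, duration_s, 0.01)
    aligned = np.column_stack([
        np.interp(common_t, audio_t, audio),
        np.interp(common_t, haptic_t, haptic),
        np.interp(common_t, vision_t, vision),
    ])
    print(aligned.shape)  # (200, 3): 200 shared time steps x 3 modalities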

Questions of Interest:

Important Dates

October 1st, 2016: Submissions open
November 11th, 2016 (extended from October 28th due to multiple requests): Deadline for submissions
December 2nd, 2016 (extended from November 25th): Notification of acceptance decisions
March 27-29, 2017: Symposium

Registration Information

Registration Form: https://www.regonline.com/sss17

All accepted authors, invited speakers, symposium participants, and other invited attendees must register by February 17, 2017. Participation will be open to active participants as well as interested individuals on a first-come, first-served basis. All registrations should be completed by March 10, 2017. Registrations will be accepted until March 27 via the online form, but earlier registration is preferred.

Program

Invited Speakers:

Dieter Fox
Department of Computer Science and Engineering
University of Washington

Allison Yamanashi Leib
Department of Psychology
UC Berkeley

Charlie Kemp
Department of Biomedical Engineering
Georgia Tech

Katherine J. Kuchenbecker
Haptic Intelligence Department, Max Planck Institute for Intelligent Systems
and Mechanical Engineering and Applied Mechanics Department, University of Pennsylvania

Oliver Brock
TU Berlin

Moqian Tian
Meta Company

Alexander Stoytchev
Department of Electrical and Computer Engineering
Iowa State University

Byron Boots
School of Interactive Computing
Georgia Tech


Schedule:

March 27th

8:30 - 9:00 Check-in
9:00 - 9:45 Symposium Introduction
9:45 - 10:30 Invited Speaker: Alexander Stoytchev, "Bootstrapping Common Sense: A Developmental Approach to Robotics"
10:30 - 11:00 Coffee Break / Discussion
11:00 - 11:45 Poster Spotlight Presentations
11:45 - 12:30 Invited Speaker: Byron Boots, TBD
12:30 - 2:00 Group Lunch @ Stanford Food Court
2:00 - 2:45 Invited Speaker: Moqian Tian, "Building Multimodal Robots: What Can We Borrow From Neuroscience?"
2:45 - 3:30 Poster Session #1
3:30 - 4:00 Coffee Break / Discussion
4:00 - 5:00 Breakout Session #1
5:00 - 5:30 Day 1 Summary / Discussion
6:00 - 7:00 Reception

March 28th

9:00 - 9:45 Invited Speaker: Charlie Kemp, "Multimodal Sensing for Assistive Robots"
9:45 - 10:30 Invited Speaker: Allison Yamanashi Leib, "Human Visual Perception of Complex Environments"
10:30 - 11:00 Coffee Break / Discussion
11:00 - 11:45 Invited Speaker: Oliver Brock, "A Pattern for Perception: From Multi-Modal Sensor Data to Task-Relevant Information"
11:45 - 12:30 Selected Paper Presentations
12:30 - 2:00 Group Lunch @ Stanford Food Court
2:00 - 2:45 Invited Speaker: Katherine Kuchenbecker, "Haptic Intelligence in Robotics"
2:45 - 3:30 Invited Speaker: Dieter Fox, TBD
3:30 - 4:00 Coffee Break / Discussion
4:00 - 4:45 Poster Session #2
4:45 - 5:30 Breakout Session #2
6:00 - 7:00 AAAI Spring Symposium Series Plenary

March 29th

9:00 - 9:30 Invited Speaker: Jivko Sinapov, "Learning From and About Humans: A Multi-Modal Approach"
9:45 - 10:30 Breakout Session / Closing Remarks
10:30 - 11:00 Coffee Break
11:00 - 12:30 Group Brunch
2:00 Outing / Sightseeing Activity (TBD)

Accepted Papers

Poster Session I (March 27th, 2:45 - 3:30 pm):

Poster Session II (March 28th, 4:00 - 4:45 pm):

Paper Submissions

We welcome student abstract submissions describing prior or ongoing work related to multi-sensory perception and embodied agents. Types of submissions may include (but are not limited to):

Submissions should be 2-4 pages in length, plus an extra page for references, in AAAI format.

Submissions should be sent by email to aaai2017sss.imopea -- AT -- gmail.com by November 11th, 2016.

Authors of exceptional submissions will be invited to submit a full paper to a special journal issue on the topic of the symposium.

Organizers

Vivian Chu
http://www.cc.gatech.edu/~vchu7
Ph.D. Candidate
School of Interactive Computing, Georgia Institute of Technology

Jivko Sinapov
http://www.cs.utexas.edu/~jsinapov
Clinical Assistant Professor (a.k.a. glorified post-doc)
Department of Computer Science, University of Texas at Austin

Jeannette Bohg
https://am.is.tuebingen.mpg.de/person/jbohg
Senior Research Scientist
Autonomous Motion Department, MPI for Intelligent Systems, Tübingen, Germany

Sonia Chernova
http://www.cc.gatech.edu/~chernova/
Catherine M. and James E. Allchin Early-Career Assistant Professor
School of Interactive Computing, Georgia Institute of Technology

Andrea L. Thomaz
www.ece.utexas.edu/speakers/andrea-l-thomaz
Associate Professor
Department of Electrical and Computer Engineering, University of Texas at Austin