UTCS AI Colloquia - Justin Hart, Yale University, "Robot Self Modeling"

Contact Name: 
Karl Pichotta
PAI 3.14
Apr 19, 2013 11:00am - 12:00pm

Signup Schedule: http://apps.cs.utexas.edu/talkschedules/cgi/list_events.cgi

Talk Audience: UTCS Faculty, Grads, Undergrads, Other Interested Parties

Host:  Peter Stone

Talk Abstract: Traditionally, models of a robot's kinematics and sensors have been provided by designers through manual processes.  These models are used for sensorimotor tasks, such as manipulation and stereo vision.  However, traditional techniques yield static models based on one-time calibrations or idealized engineering drawings; such models often fail to represent the actual hardware, and individual unimodal models, such as those describing kinematics and vision, may disagree with each other.  My research instead constructs robots that learn unified models of themselves adaptively and online.  My robot, Nico, creates a highly accurate self-representation through experience, and is able to use this self-representation for novel tasks, such as inferring the perspective of a mirror by watching its own motion reflected therein.  This represents an important step in the disciplined study of self-awareness in robotic systems.

Speaker Bio: Justin Hart is a Ph.D. candidate in the Department of Computer Science at Yale University, where he is advised by Professor Brian Scassellati. His research focuses on robotic self-modeling, in which robots learn models of their bodies and senses through data sampled during operation. He has also performed significant work in human-robot interaction, including studies on creating trust by manipulating social presence, attributions of agency, and the creation of lifelike motion. His work has recently been featured in the Society of Manufacturing Engineers Innovation Watch List, and has appeared in media outlets such as New Scientist, BBC News, GE Focus Forward Films, and Google Solve for X.