Forum for Artificial Intelligence

[ About FAI   |   Upcoming talks   |   Past talks ]



The Forum for Artificial Intelligence meets every other week (or so) to discuss scientific, philosophical, and cultural issues in artificial intelligence. Both technical research topics and broader inter-disciplinary aspects of AI are covered, and all are welcome to attend!

If you would like to be added to the FAI mailing list, subscribe here. If you have any questions or comments, please send email to Karl Pichotta or Craig Corcoran.

The current schedule is also available as a Google Calendar or in iCal format.



[ Upcoming talks ]

Fri, September 19, 11:00AM, GDC 6.302
Jivko Sinapov, University of Texas at Austin
Behavior-Grounded Multisensory Object Perception and Exploration by a Humanoid Robot

Fri, October 3, 11:00AM, GDC 6.302
George Konidaris, Duke University
Robots, Skills, and Symbols

Fri, November 7, 11:00AM, GDC 6.032
Byron Wallace, School of Information, University of Texas at Austin
Automating evidence synthesis via machine learning and natural language processing

Friday, September 19, 2014, 11:00AM



GDC 6.302

Behavior-Grounded Multisensory Object Perception and Exploration by a Humanoid Robot

Jivko Sinapov   [homepage]

University of Texas at Austin

Infants use exploratory behaviors to learn about the objects around them. Psychologists have theorized that behaviors such as touching, pressing, lifting, and dropping enable infants to form grounded object representations. For example, scratching an object can provide information about its roughness, while lifting it can provide information about its weight. In a sense, the exploratory behavior acts as a "question" to the object, which is subsequently "answered" by the sensory stimuli produced during the execution of the behavior. In contrast, most object representations used by robots today rely solely on computer vision or laser scan data gathered through passive observation. Such disembodied approaches to robotic perception may be useful for recognizing an object using a 3D model database, but they will fail to infer object properties that cannot be detected using vision alone.

To bridge this gap, my research has pursued a framework for object perception and exploration in which the robot's representation of objects is grounded in its own sensorimotor experience with them. In this framework, an object is represented by sensorimotor contingencies that span a diverse set of exploratory behaviors and sensory modalities. Results from several large-scale experimental studies show that the behavior-grounded object representation enables a robot to solve a wide variety of tasks including recognition of objects based on the stimuli that they produce, object grouping and sorting, and learning category labels that describe objects and their properties.

About the speaker:

Jivko Sinapov received the Ph.D. degree in computer science and human-computer interaction from Iowa State University (ISU) in 2013. While working towards his Ph.D. at ISU's Developmental Robotics Lab, he developed novel methods for behavioral object exploration and multi-modal perception. He is currently a Postdoctoral Fellow working with Peter Stone in the Artificial Intelligence Lab at the University of Texas at Austin. His research interests include developmental robotics, computational perception, autonomous manipulation, and human-robot interaction.

Friday, October 3, 2014, 11:00AM



GDC 6.302

Robots, Skills, and Symbols

George Konidaris   [homepage]

Duke University

Robots are increasingly becoming a part of our daily lives, from the automated vacuum cleaners in our homes to the rovers exploring Mars. However, while recent years have seen dramatic progress in the development of affordable, general-purpose robot hardware, the capabilities of that hardware far exceed our ability to write software that can adequately control it.

The key challenge here is one of abstraction. Generally capable behavior requires high-level reasoning and planning, but perception and actuation must ultimately be performed using noisy, high-bandwidth, low-level sensors and effectors. My research uses methods from hierarchical reinforcement learning as a basis for constructing robot control hierarchies through the use of learned motor controllers, or skills.

The first part of my talk will present work on autonomous robot skill acquisition. I will demonstrate a robot system that learns to complete a task, and then extracts components of its solution as reusable skills, which it deploys to quickly solve a second task. The second part will briefly focus on practical methods for acquiring skill control policies through the use of human demonstration and active learning. Finally, I will present my recent work on establishing a link between the skills available to a robot and the abstract representations it should use to plan with them. I will discuss the implications of these results for building true action hierarchies for reinforcement learning problems.

About the speaker:

George Konidaris is an Assistant Professor of Computer Science and Electrical and Computer Engineering at Duke University. He holds a BScHons from the University of the Witwatersrand, an MSc from the University of Edinburgh, and a PhD from the University of Massachusetts Amherst, having completed his thesis under the supervision of Professor Andy Barto. Prior to joining Duke, he was a postdoctoral researcher at MIT with Professors Leslie Kaelbling and Tomas Lozano-Perez.

Friday, November 7, 2014, 11:00AM



GDC 6.032

Automating evidence synthesis via machine learning and natural language processing

Byron Wallace   [homepage]

School of Information, University of Texas at Austin

Evidence-based medicine (EBM) looks to inform patient care with the totality of available relevant evidence. Systematic reviews are the cornerstone of EBM and are critical to modern healthcare, informing everything from national health policy to bedside decision-making. But conducting systematic reviews is extremely laborious (and hence expensive): producing a single review requires thousands of person-hours. Moreover, the exponential expansion of the biomedical literature base has imposed an unprecedented burden on reviewers, thus multiplying costs. Researchers can no longer keep up with the primary literature, and this hinders the practice of evidence-based care.

To mitigate this issue, I will discuss past and recent advances in machine learning and natural language processing methods that aim to optimize the practice of EBM. These include methods for semi-automating evidence identification (i.e., citation screening) and more recent work on automating the extraction of structured data from full-text published articles describing clinical trials. As I will discuss, these tasks pose challenging problems from a machine learning vantage point, and hence motivate the development of novel approaches. I will present evaluations of these methods in the context of EBM.

About the speaker:

Byron Wallace is an assistant professor in the School of Information at the University of Texas at Austin. He holds a PhD in Computer Science from Tufts University, where he was advised by Carla Brodley. Prior to joining UT, he was research faculty at Brown University, where he was part of the Center for Evidence-Based Medicine and also affiliated with the Brown Laboratory for Linguistic Information Processing. His primary research is in applications of machine learning and natural language processing to problems in health -- particularly in evidence-based medicine.

Wallace's work is supported by grants from the NSF and the ARO. He was the runner-up for the 2013 ACM SIGKDD (Special Interest Group on Knowledge Discovery and Data Mining) Doctoral Dissertation Award for his thesis work.



[ Past talks ]

[ FAI Archives ]

Fall 2013 - Spring 2014

Fall 2012 - Spring 2013

Fall 2011 - Spring 2012

Fall 2010 - Spring 2011

Fall 2009 - Spring 2010

Fall 2008 - Spring 2009

Fall 2007 - Spring 2008

Fall 2006 - Spring 2007

Fall 2005 - Spring 2006

Spring 2005

Fall 2004

Spring 2004

Fall 2003

Spring 2003

Fall 2002

Spring 2002

Fall 2001

Spring 2001

Fall 2000

Spring 2000