Forum for Artificial Intelligence

[ About FAI   |   Upcoming talks   |   Past talks ]



The Forum for Artificial Intelligence meets every other week (or so) to discuss scientific, philosophical, and cultural issues in artificial intelligence. Both technical research topics and broader inter-disciplinary aspects of AI are covered, and all are welcome to attend!

If you would like to be added to the FAI mailing list, subscribe here. If you have any questions or comments, please send email to Karl Pichotta or Craig Corcoran.

The current schedule is also available as a Google Calendar or alternatively in iCal format.



[ Upcoming talks ]

Fri, January 23, 11:00AM, GDC 6.302
Shimon Whiteson (Intelligent Autonomous Systems Group, Informatics Institute, University of Amsterdam)
Relative Upper Confidence Bound for the K-Armed Dueling Bandit Problem

Friday, January 23, 2015, 11:00AM



GDC 6.302

Relative Upper Confidence Bound for the K-Armed Dueling Bandit Problem

Shimon Whiteson   [homepage]

Intelligent Autonomous Systems Group, Informatics Institute, University of Amsterdam

In this talk, I will propose a new method for the K-armed dueling bandit problem, a variation on the regular K-armed bandit problem that offers only relative feedback about pairs of arms. Our approach extends the Upper Confidence Bound algorithm to the relative setting by using estimates of the pairwise probabilities to select a promising arm and applying Upper Confidence Bound with the winner as a benchmark. We prove a sharp finite-time regret bound of order O(K log T) on a very general class of dueling bandit problems that matches a lower bound proven by Yue et al. In addition, our empirical results using real data from an information retrieval application show that it greatly outperforms the state of the art.
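
For readers unfamiliar with the relative setting, the sketch below illustrates the general idea of optimism-based arm selection over pairwise win estimates. It is an illustrative approximation only, not the authors' exact RUCB algorithm; the class name, the exploration parameter alpha, and the champion/challenger terminology are assumptions made for this example.

    import numpy as np

    # Illustrative sketch of optimism-based selection for the K-armed dueling
    # bandit problem described above. This is NOT the authors' exact RUCB
    # algorithm; names and details are assumptions for exposition.
    class RelativeUCBSketch:
        def __init__(self, num_arms, alpha=0.51):
            self.K = num_arms
            self.alpha = alpha                            # exploration strength
            self.wins = np.zeros((num_arms, num_arms))    # wins[i, j]: times arm i beat arm j

        def select_pair(self, t):
            n = self.wins + self.wins.T                   # comparisons per pair
            with np.errstate(divide="ignore", invalid="ignore"):
                p_hat = np.where(n > 0, self.wins / n, 0.5)   # estimated P(i beats j)
            bonus = np.sqrt(self.alpha * np.log(max(t, 2)) / np.maximum(n, 1))
            ucb = p_hat + bonus
            np.fill_diagonal(ucb, 0.5)
            # "Champion": an arm whose optimistic estimates beat every other arm.
            candidates = np.flatnonzero((ucb >= 0.5).all(axis=1))
            champion = int(np.random.choice(candidates)) if len(candidates) else np.random.randint(self.K)
            # "Challenger": the arm with the best optimistic chance of beating the champion.
            col = ucb[:, champion].copy()
            col[champion] = -np.inf
            challenger = int(np.argmax(col))
            return champion, challenger

        def update(self, winner, loser):
            self.wins[winner, loser] += 1

At each round one would call select_pair(t), duel the two returned arms, and record the outcome with update(winner, loser); the relative feedback never reveals absolute rewards, only which arm won the comparison.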

About the speaker:

I am an associate professor and VIDI laureate at the Informatics Institute at the University of Amsterdam. I lead the Autonomous Agents section of the Intelligent Autonomous Systems group. My research focuses on single- and multi-agent decision-theoretic planning and learning, including reinforcement learning, multi-agent planning, and stochastic optimization.

Current research efforts include multi-objective multi-agent planning, planning in decentralized partially observable Markov decision processes, and reinforcement learning for information retrieval. I am also the project leader and scientific coordinator for the TERESA Project.



[ Past talks ]

Fri, September 19, 11:00AM, GDC 6.302
Jivko Sinapov (University of Texas at Austin)
Behavior-Grounded Multisensory Object Perception and Exploration by a Humanoid Robot

Fri, October 3, 11:00AM, GDC 6.302
George Konidaris (Duke University)
Robots, Skills, and Symbols

Fri, November 7, 11:00AM, GDC 6.302
Byron Wallace (School of Information, University of Texas at Austin)
Automating evidence synthesis via machine learning and natural language processing

Fri, November 21, 11:00AM, GDC 6.302
Marc Levoy (Stanford University and Google)
What Google Glass means for the future of photography

Friday, September 19, 2014, 11:00AM



GDC 6.302

Behavior-Grounded Multisensory Object Perception and Exploration by a Humanoid Robot

Jivko Sinapov   [homepage]

University of Texas at Austin

Infants use exploratory behaviors to learn about the objects around them. Psychologists have theorized that behaviors such as touching, pressing, lifting, and dropping enable infants to form grounded object representations. For example, scratching an object can provide information about its roughness, while lifting it can provide information about its weight. In a sense, the exploratory behavior acts as a "question" to the object, which is subsequently "answered" by the sensory stimuli produced during the execution of the behavior. In contrast, most object representations used by robots today rely solely on computer vision or laser scan data gathered through passive observation. Such disembodied approaches to robotic perception may be useful for recognizing an object using a 3D model database, but they will fail to infer object properties that cannot be detected using vision alone.

To bridge this gap, my research has pursued a framework for object perception and exploration in which the robot's representation of objects is grounded in its own sensorimotor experience with them. In this framework, an object is represented by sensorimotor contingencies that span a diverse set of exploratory behaviors and sensory modalities. Results from several large-scale experimental studies show that the behavior-grounded object representation enables a robot to solve a wide variety of tasks including recognition of objects based on the stimuli that they produce, object grouping and sorting, and learning category labels that describe objects and their properties.
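
As a concrete (if highly simplified) illustration of what a behavior-grounded representation might look like, the sketch below indexes an object's sensory experience by (behavior, modality) pairs and recognizes new stimuli by nearest-neighbor matching. The class and function names and the Euclidean matching rule are assumptions for this example, not the speaker's actual system.

    from collections import defaultdict

    # Sketch of a behavior-grounded object representation: each object is
    # indexed by (exploratory behavior, sensory modality) pairs, and
    # recognition compares new sensory features against stored experience.
    # Names and the distance-based matching are illustrative assumptions.
    class GroundedObjectModel:
        def __init__(self):
            # (behavior, modality) -> list of feature vectors observed for this object
            self.observations = defaultdict(list)

        def record(self, behavior, modality, features):
            self.observations[(behavior, modality)].append(features)

        def distance(self, behavior, modality, features):
            """How far a new observation is from this object's past experience."""
            past = self.observations.get((behavior, modality), [])
            if not past:
                return float("inf")
            return min(sum((a - b) ** 2 for a, b in zip(features, f)) ** 0.5 for f in past)

    def recognize(models, behavior, modality, features):
        """Pick the known object whose past experience best matches the new stimulus."""
        return min(models, key=lambda name: models[name].distance(behavior, modality, features))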

About the speaker:

Jivko Sinapov received the Ph.D. degree in computer science and human-computer interaction from Iowa State University (ISU) in 2013. While working towards his Ph.D. at ISU's Developmental Robotics Lab, he developed novel methods for behavioral object exploration and multi-modal perception. He is currently a Postdoctoral Fellow working with Peter Stone in the Artificial Intelligence Lab at the University of Texas at Austin. His research interests include developmental robotics, computational perception, autonomous manipulation, and human-robot interaction.

Friday, October 3, 2014, 11:00AM



GDC 6.302

Robots, Skills, and Symbols

George Konidaris   [homepage]

Duke University

Robots are increasingly becoming a part of our daily lives, from the automated vacuum cleaners in our homes to the rovers exploring Mars. However, while recent years have seen dramatic progress in the development of affordable, general-purpose robot hardware, the capabilities of that hardware far exceed our ability to write software to adequately control it.

The key challenge here is one of abstraction. Generally capable behavior requires high-level reasoning and planning, but perception and actuation must ultimately be performed using noisy, high-bandwidth, low-level sensors and effectors. My research uses methods from hierarchical reinforcement learning as a basis for constructing robot control hierarchies through the use of learned motor controllers, or skills.

The first part of my talk will present work on autonomous robot skill acquisition. I will demonstrate a robot system that learns to complete a task, and then extracts components of its solution as reusable skills, which it deploys to quickly solve a second task. The second part will briefly focus on practical methods for acquiring skill control policies, through the use of human demonstration and active learning. Finally, I will present my recent work on establishing a link between the skills available to a robot and the abstract representations it should use to plan with them. I will discuss the implications of these results for building true action hierarchies for reinforcement learning problems.
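
Since the talk builds on the notion of a skill as a learned motor controller, the sketch below shows one common formalization from hierarchical reinforcement learning, the options framework: an initiation set, an internal policy, and a termination condition. The field and function names here are generic illustrations, not taken from the speaker's systems.

    import random
    from dataclasses import dataclass
    from typing import Any, Callable

    State = Any
    Action = Any

    # A "skill" in the options framework of hierarchical reinforcement learning:
    # where it can start, what it does while running, and when it stops.
    @dataclass
    class Option:
        initiation: Callable[[State], bool]     # can this skill be invoked in state s?
        policy: Callable[[State], Action]       # low-level control while the skill runs
        termination: Callable[[State], float]   # probability of stopping in state s

    def run_option(option: Option, state: State,
                   step: Callable[[State, Action], State], max_steps: int = 100) -> State:
        """Execute one skill until its termination condition fires (or a step cap)."""
        assert option.initiation(state), "skill invoked outside its initiation set"
        for _ in range(max_steps):
            state = step(state, option.policy(state))
            if random.random() < option.termination(state):
                break
        return state

A higher-level planner or learner then treats each such option as a single abstract action, which is the sense in which learned skills bridge low-level sensing and actuation with high-level reasoning.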

About the speaker:

George Konidaris is an Assistant Professor of Computer Science and Electrical and Computer Engineering at Duke University. He holds a BScHons from the University of the Witwatersrand, an MSc from the University of Edinburgh, and a PhD from the University of Massachusetts Amherst, having completed his thesis under the supervision of Professor Andy Barto. Prior to joining Duke, he was a postdoctoral researcher at MIT with Professors Leslie Kaelbling and Tomas Lozano-Perez.

Friday, November 7, 2014, 11:00AM



GDC 6.302

Automating evidence synthesis via machine learning and natural language processing

Byron Wallace   [homepage]

School of Information, University of Texas at Austin

Evidence-based medicine (EBM) looks to inform patient care with the totality of available relevant evidence. Systematic reviews are the cornerstone of EBM and are critical to modern healthcare, informing everything from national health policy to bedside decision-making. But conducting systematic reviews is extremely laborious (and hence expensive): producing a single review requires thousands of person-hours. Moreover, the exponential expansion of the biomedical literature base has imposed an unprecedented burden on reviewers, thus multiplying costs. Researchers can no longer keep up with the primary literature, and this hinders the practice of evidence-based care.

To mitigate this issue, I will discuss past and recent advances in machine learning and natural language processing methods that look to optimize the practice of EBM. These include methods for semi-automating evidence identification (i.e., citation screening) and more recent work on automating the extraction of structured data from full-text published articles describing clinical trials. As I will discuss, these tasks pose challenging problems from a machine learning vantage point, and hence motivate the development of novel approaches. I will present evaluations of these methods in the context of EBM.
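
To make the citation-screening task concrete, the toy sketch below treats it as supervised text classification: a model trained on abstracts a reviewer has already labeled ranks the remaining citations by predicted relevance. The tiny dataset and the TF-IDF/logistic-regression pipeline are illustrative assumptions, not the speaker's actual tools.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    # Toy sketch of semi-automated citation screening as text classification.
    # The data and pipeline are illustrative, not the speaker's actual systems.
    labeled_abstracts = [
        "randomized trial of drug A versus placebo in adults",
        "case report of a rare adverse event",
    ]
    labels = [1, 0]          # 1 = include in the systematic review, 0 = exclude
    unscreened = ["double-blind randomized controlled trial of drug A"]

    vectorizer = TfidfVectorizer(ngram_range=(1, 2))
    clf = LogisticRegression().fit(vectorizer.fit_transform(labeled_abstracts), labels)

    # Present unscreened citations to the reviewer in order of predicted relevance.
    scores = clf.predict_proba(vectorizer.transform(unscreened))[:, 1]
    ranked = sorted(zip(unscreened, scores), key=lambda pair: -pair[1])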

About the speaker:

Byron Wallace is an assistant professor in the School of Information at the University of Texas at Austin. He holds a PhD in Computer Science from Tufts University, where he was advised by Carla Brodley. Prior to joining UT, he was research faculty at Brown University, where he was part of the Center for Evidence-Based Medicine and also affiliated with the Brown Laboratory for Linguistic Information Processing. His primary research is in applications of machine learning and natural language processing to problems in health -- particularly in evidence-based medicine.

Wallace's work is supported by grants from the NSF and the ARO. He was runner-up for the 2013 ACM Special Interest Group on Knowledge Discovery and Data Mining (SIGKDD) Doctoral Dissertation Award for his thesis work.

Friday, November 21, 2014, 11:00AM



GDC 6.302

What Google Glass means for the future of photography

Marc Levoy   [homepage]

Stanford University and Google

Although head-mounted cameras (and displays) are not new, Google Glass has the potential to make these devices commonplace. This has implications for the practice, art, and uses of photography. So what's different about doing photography with Glass? First, Glass doesn't work like a conventional camera; it's hands-free, point-of-view, always available, and instantly triggerable. Second, Glass facilitates different uses than a conventional camera: recording documents, making visual to-do lists, logging your life, and swapping eyes with other Glass users. Third, Glass will be an open platform, unlike most cameras. This is not easy, because Glass is a heterogeneous computing platform, with multiple processors having different performance, efficiency, and programmability. The challenge is to invent software abstractions that allow control over the camera as well as access to these specialized processors. Finally, devices like Glass that are head-mounted and perform computational photography in real time have the potential to give wearers "superhero vision", like seeing in the dark, or magnifying subtle motion or changes. If such devices can also perform computer vision in real time and are connected to the cloud, then they can do face recognition, live language translation, and information recall. The hard part is not imagining these capabilities, but deciding which ones are feasible, useful, and socially acceptable.

About the speaker:

Marc Levoy is the VMware Founders Professor of Computer Science and Electrical Engineering, Emeritus, at Stanford University. He received a Bachelor's and Master's in Architecture from Cornell University in 1976 and 1978, and a PhD in Computer Science from the University of North Carolina at Chapel Hill in 1989. In the 1970's Levoy worked on computer animation, developing a cartoon animation system that was used by Hanna-Barbera Productions to make The Flintstones, Scooby Doo, and other shows. In the 1980's Levoy worked on volume rendering, a technique for displaying three-dimensional functions such as computed tomography (CT) or magnetic resonance (MR) data. In the 1990's he worked on 3D laser scanning, culminating in the Digital Michelangelo Project, in which he and his students spent a year in Italy digitizing the statues of Michelangelo. In the 2000's he worked on computational photography and microscopy, including light field imaging as commercialized by Lytro and other companies. At Stanford he taught computer graphics and the science of art, and still teaches digital photography. Outside of academia, Levoy co-designed the Google book scanner, launched Google's Street View project, and currently leads a team in GoogleX that has worked on Project Glass and the Nexus 6 HDR+ mode. Awards: Charles Goodwin Sands Medal for best undergraduate thesis (1976), National Science Foundation Presidential Young Investigator (1991), ACM SIGGRAPH Computer Graphics Achievement Award (1996), ACM Fellow (2007).

[ FAI Archives ]

Fall 2013 - Spring 2014

Fall 2012 - Spring 2013

Fall 2011 - Spring 2012

Fall 2010 - Spring 2011

Fall 2009 - Spring 2010

Fall 2008 - Spring 2009

Fall 2007 - Spring 2008

Fall 2006 - Spring 2007

Fall 2005 - Spring 2006

Spring 2005

Fall 2004

Spring 2004

Fall 2003

Spring 2003

Fall 2002

Spring 2002

Fall 2001

Spring 2001

Fall 2000

Spring 2000