Forum for Artificial Intelligence

[ About FAI   |   Upcoming talks   |   Past talks ]



The Forum for Artificial Intelligence meets every other week (or so) to discuss scientific, philosophical, and cultural issues in artificial intelligence. Both technical research topics and broader inter-disciplinary aspects of AI are covered, and all are welcome to attend!

If you would like to be added to the FAI mailing list, subscribe here. If you have any questions or comments, please send email to Karl Pichotta or Craig Corcoran.

The current schedule is also available as a Google Calendar or in iCal format.



[ Upcoming talks ]

Fri, January 30
12:00PM
UTA 5.522
Jeffrey Bigham
Carnegie Mellon University
Deeply Integrating Human and Machine Intelligence To Power Deployable Systems
Mon, February 2
11:00AM
GDC 4.516
Wei Xu
University of Pennsylvania
Modeling Lexically Divergent Paraphrases in Twitter (and Shakespeare!)
Fri, April 3
11:00AM
GDC 6.302
Noah Smith
Carnegie Mellon University

Friday, January 30, 2015, 12:00PM



UTA 5.522

Deeply Integrating Human and Machine Intelligence To Power Deployable Systems

Jeffrey Bigham   [homepage]

Carnegie Mellon University

Over the past few years, I have been developing and deploying interactive crowd-powered systems that solve characteristic "hard" problems to help people get things done in their everyday lives. For instance, VizWiz answers visual questions for blind people in seconds, Legion drives robots in response to natural language commands, Chorus holds helpful general conversations with human partners, and Scribe converts streaming speech to text in less than five seconds.

My research envisions a future in which the intelligent systems that we have dreamed about for decades, which have inspired generations of computer scientists from the field's beginning, are brought about for the benefit of people. My work illustrates a path for achieving this vision by leveraging the on-demand labor of people to fill in for components that we cannot currently automate, by building frameworks that allow groups to do together what even expert individuals cannot do alone, and by gradually allowing machines to take over in a data-driven way. A crowd-powered world may seem counter to the goals of computer science, but I believe that it is precisely by creating and deploying the systems of our dreams that we will learn how to advance computer science to create the machines that will someday realize them.

[Talk hosted by UT School of Information; Talk info here]

About the speaker:

Jeffrey P. Bigham is an Associate Professor in the Human-Computer Interaction and Language Technologies Institutes in the School of Computer Science at Carnegie Mellon University. He uses clever combinations of crowds and computation to build truly intelligent systems, often with a focus on systems supporting people with disabilities. Dr. Bigham received his B.S.E. degree in Computer Science from Princeton University in 2003, and received his Ph.D. in Computer Science and Engineering from the University of Washington in 2009. From 2009 to 2013, he was an Assistant Professor at the University of Rochester, where he founded the ROC HCI human-computer interaction research group. He has been a Visiting Researcher at MIT CSAIL, Microsoft Research, and Google[x]. He has received a number of awards for his work, including the MIT Technology Review Top 35 Innovators Under 35 Award, the Alfred P. Sloan Fellowship, and the National Science Foundation CAREER Award.

Monday, February 2, 2015, 11:00AM



GDC 4.516

Modeling Lexically Divergent Paraphrases in Twitter (and Shakespeare!)

Wei Xu   [homepage]

University of Pennsylvania

Paraphrases are alternative linguistic expressions of the same meaning. Identifying paraphrases is fundamental to many natural language processing tasks and has been extensively studied for standard contemporary English. In this talk I will present MULTIP (Multi-instance Learning Paraphrase Model), a joint word-sentence alignment model suited to identifying paraphrases in the noisy user-generated text on Twitter. The model infers latent word-level paraphrase anchors from only sentence-level annotations during learning. This is a major departure from previous approaches that rely on lexical or distributional similarities over sentence pairs. By reducing the dependence on word overlap as evidence of paraphrase, our approach identifies more lexically divergent expressions with equivalent meaning. For experiments, we constructed a Twitter Paraphrase Corpus using a novel and efficient crowdsourcing methodology. Our new approach improves on the previous state of the art, a method that combines a latent space model with a feature-based supervised classifier. I will also present findings on paraphrasing between standard English and Shakespearean styles.

Joint work with Chris Callison-Burch (UPenn), Bill Dolan (MSR), Alan Ritter (OSU), Yangfeng Ji (GaTech), Colin Cherry (NRC) and Ralph Grishman (NYU).
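As a rough illustration of the multi-instance idea above (this is not the MULTIP model itself, just a sketch of how sentence-level labels can supervise latent word-level anchors), the example below aggregates hypothetical word-pair scores into a sentence-level paraphrase probability with a noisy-OR: the pair is judged a paraphrase if at least one word-level anchor is likely. The scoring function and sentences are made up for the example.

```python
# Illustrative sketch only: a noisy-OR link between latent word-level anchors
# and a sentence-level paraphrase decision (not the MULTIP formulation).
import itertools

def word_pair_score(w1, w2):
    """Hypothetical probability that (w1, w2) is a word-level paraphrase anchor.
    A real model would score lexical, distributional, and alignment features."""
    return 0.9 if w1 == w2 else 0.1

def sentence_paraphrase_prob(sent1, sent2):
    """Noisy-OR aggregation: the sentences are paraphrases if at least one
    word pair is an anchor, so P = 1 - prod(1 - p_anchor)."""
    not_any_anchor = 1.0
    for w1, w2 in itertools.product(sent1.split(), sent2.split()):
        not_any_anchor *= 1.0 - word_pair_score(w1, w2)
    return 1.0 - not_any_anchor

print(sentence_paraphrase_prob("obama wins election", "victory for obama"))
```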

About the speaker:

Wei Xu is a postdoc in the Computer and Information Science Department at the University of Pennsylvania, working with Chris Callison-Burch. Her research focuses on paraphrases, social media, and information extraction. She received her PhD in Computer Science from New York University. She is organizing the SemEval-2015 shared task on Paraphrase and Semantic Similarity in Twitter, and the Workshop on Noisy User-generated Text at ACL-2015 (http://noisy-text.github.io/). During her PhD, she visited the University of Washington for two years and interned at Microsoft Research, ETS, and Amazon.com.

Friday, April 3, 2015, 11:00AM



GDC 6.302

Noah Smith   [homepage]

Carnegie Mellon University



[ Past talks ]

Fri, September 19
11:00AM
GDC 6.302
Jivko Sinapov
University of Texas at Austin
Behavior-Grounded Multisensory Object Perception and Exploration by a Humanoid Robot
Fri, October 3
11:00AM
GDC 6.302
George Konidaris
Duke University
Robots, Skills, and Symbols
Fri, November 7
11:00AM
GDC 6.302
Byron Wallace
School of Information, University of Texas at Austin
Automating evidence synthesis via machine learning and natural language processing
Fri, November 21
11:00AM
GDC 6.302
Marc Levoy
Stanford University and Google
What Google Glass means for the future of photography
Fri, January 23
11:00AM
GDC 6.302
Shimon Whiteson
University of Amsterdam
Relative Upper Confidence Bound for the K-Armed Dueling Bandit Problem
Fri, January 23
2:00PM
GDC 6.302
Rina Dechter
University of California, Irvine
Modern Exact and Approximate Combinatorial Optimization Algorithms for Graphical Models

Friday, September 19, 2014, 11:00AM



GDC 6.302

Behavior-Grounded Multisensory Object Perception and Exploration by a Humanoid Robot

Jivko Sinapov   [homepage]

University of Texas at Austin

Infants use exploratory behaviors to learn about the objects around them. Psychologists have theorized that behaviors such as touching, pressing, lifting, and dropping enable infants to form grounded object representations. For example, scratching an object can provide information about its roughness, while lifting it can provide information about its weight. In a sense, the exploratory behavior acts as a "question" to the object, which is subsequently "answered" by the sensory stimuli produced during the execution of the behavior. In contrast, most object representations used by robots today rely solely on computer vision or laser scan data, gathered through passive observation. Such disembodied approaches to robotic perception may be useful for recognizing an object using a 3D model database, but will nevertheless fail to infer object properties that cannot be detected using vision alone.

To bridge this gap, my research has pursued a framework for object perception and exploration in which the robot's representation of objects is grounded in its own sensorimotor experience with them. In this framework, an object is represented by sensorimotor contingencies that span a diverse set of exploratory behaviors and sensory modalities. Results from several large-scale experimental studies show that the behavior-grounded object representation enables a robot to solve a wide variety of tasks including recognition of objects based on the stimuli that they produce, object grouping and sorting, and learning category labels that describe objects and their properties.

About the speaker:

Jivko Sinapov received the Ph.D. degree in computer science and human-computer interaction from Iowa State University (ISU) in 2013. While working towards his Ph.D. at ISU's Developmental Robotics Lab, he developed novel methods for behavioral object exploration and multi-modal perception. He is currently a Postdoctoral Fellow in the Artificial Intelligence Lab at the University of Texas at Austin, working with Peter Stone. His research interests include developmental robotics, computational perception, autonomous manipulation, and human-robot interaction.

Friday, October 3, 2014, 11:00AM



GDC 6.302

Robots, Skills, and Symbols

George Konidaris   [homepage]

Duke University

Robots are increasingly becoming a part of our daily lives, from the automated vacuum cleaners in our homes to the rovers exploring Mars. However, while recent years have seen dramatic progress in the development of affordable, general-purpose robot hardware, the capabilities of that hardware far exceed our ability to write software to adequately control it.

The key challenge here is one of abstraction. Generally capable behavior requires high-level reasoning and planning, but perception and actuation must ultimately be performed using noisy, high-bandwidth, low-level sensors and effectors. My research uses methods from hierarchical reinforcement learning as a basis for constructing robot control hierarchies through the use of learned motor controllers, or skills.

The first part of my talk will present work on autonomous robot skill acquisition. I will demonstrate a robot system that learns to complete a task, and then extracts components of its solution as reusable skills, which it deploys to quickly solve a second task. The second part will briefly focus on practical methods for acquiring skill control policies, through the use of human demonstration and active learning. Finally, I will present my recent work on establishing a link between the skills available to a robot and the abstract representations it should use to plan with them. I will discuss the implications of these results for building true action hierarchies for reinforcement learning problems.

About the speaker:

George Konidaris is an Assistant Professor of Computer Science and Electrical and Computer Engineering at Duke University. He holds a BScHons from the University of the Witwatersrand, an MSc from the University of Edinburgh, and a PhD from the University of Massachusetts Amherst, having completed his thesis under the supervision of Professor Andy Barto. Prior to joining Duke, he was a postdoctoral researcher at MIT with Professors Leslie Kaelbling and Tomas Lozano-Perez.

Friday, November 7, 2014, 11:00AM



GDC 6.302

Automating evidence synthesis via machine learning and natural language processing

Byron Wallace   [homepage]

School of Information, University of Texas at Austin

Evidence-based medicine (EBM) looks to inform patient care with the totality of available relevant evidence. Systematic reviews are the cornerstone of EBM and are critical to modern healthcare, informing everything from national health policy to bedside decision-making. But conducting systematic reviews is extremely laborious (and hence expensive): producing a single review requires thousands of person-hours. Moreover, the exponential expansion of the biomedical literature base has imposed an unprecedented burden on reviewers, thus multiplying costs. Researchers can no longer keep up with the primary literature, and this hinders the practice of evidence-based care.

To mitigate this issue, I will discuss past and recent advances in machine learning and natural language processing methods that aim to optimize the practice of EBM. These include methods for semi-automating evidence identification (i.e., citation screening) and more recent work on automating the extraction of structured data from full-text published articles describing clinical trials. As I will discuss, these tasks pose challenging problems from a machine learning vantage point, and hence motivate the development of novel approaches. I will present evaluations of these methods in the context of EBM.
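As a rough sketch of the citation-screening setting described above (not the speaker's system), the example below treats screening as supervised text classification and ranks unscreened abstracts by predicted relevance so reviewers can prioritize them. The citations, labels, and model choice (scikit-learn TF-IDF features with logistic regression) are assumptions made purely for illustration.

```python
# Illustrative sketch: semi-automated citation screening as text classification.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical screened citations: 1 = include in the review, 0 = exclude.
abstracts = [
    "randomized controlled trial of statins for primary prevention",
    "case report of a rare dermatological condition",
    "double-blind placebo-controlled trial of a new anticoagulant",
    "editorial on hospital management practices",
]
labels = [1, 0, 1, 0]

vectorizer = TfidfVectorizer(ngram_range=(1, 2))
X = vectorizer.fit_transform(abstracts)
classifier = LogisticRegression(class_weight="balanced").fit(X, labels)

# Rank new, unscreened citations by predicted probability of relevance.
unscreened = [
    "trial comparing two doses of statins in elderly patients",
    "survey of staff scheduling software in clinics",
]
scores = classifier.predict_proba(vectorizer.transform(unscreened))[:, 1]
for text, score in sorted(zip(unscreened, scores), key=lambda pair: -pair[1]):
    print(f"{score:.2f}  {text}")
```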

About the speaker:

Byron Wallace is an assistant professor in the School of Information at the University of Texas at Austin. He holds a PhD in Computer Science from Tufts University, where he was advised by Carla Brodley. Prior to joining UT, he was research faculty at Brown University, where he was part of the Center for Evidence-Based Medicine and also affiliated with the Brown Laboratory for Linguistic Information Processing. His primary research is in applications of machine learning and natural language processing to problems in health -- particularly in evidence-based medicine.

Wallace's work is supported by grants from the NSF and the ARO. He was recognized as runner-up for the 2013 doctoral dissertation award of the ACM Special Interest Group on Knowledge Discovery and Data Mining (SIGKDD) for his thesis work.

Friday, November 21, 2014, 11:00AM



GDC 6.302

What Google Glass means for the future of photography

Marc Levoy   [homepage]

Stanford University and Google

Although head-mounted cameras (and displays) are not new, Google Glass has the potential to make these devices commonplace. This has implications for the practice, art, and uses of photography. So what's different about doing photography with Glass? First, Glass doesn't work like a conventional camera; it's hands-free, point-of-view, always available, and instantly triggerable. Second, Glass facilitates different uses than a conventional camera: recording documents, making visual to-do lists, logging your life, and swapping eyes with other Glass users. Third, Glass will be an open platform, unlike most cameras. This is not easy, because Glass is a heterogeneous computing platform, with multiple processors having different performance, efficiency, and programmability. The challenge is to invent software abstractions that allow control over the camera as well as access to these specialized processors. Finally, devices like Glass that are head-mounted and perform computational photography in real time have the potential to give wearers "superhero vision", like seeing in the dark, or magnifying subtle motion or changes. If such devices can also perform computer vision in real time and are connected to the cloud, then they can do face recognition, live language translation, and information recall. The hard part is not imagining these capabilities, but deciding which ones are feasible, useful, and socially acceptable.

About the speaker:

Marc Levoy is the VMware Founders Professor of Computer Science and Electrical Engineering, Emeritus, at Stanford University. He received a Bachelor's and Master's in Architecture from Cornell University in 1976 and 1978, and a PhD in Computer Science from the University of North Carolina at Chapel Hill in 1989. In the 1970's Levoy worked on computer animation, developing a cartoon animation system that was used by Hanna-Barbera Productions to make The Flintstones, Scooby Doo, and other shows. In the 1980's Levoy worked on volume rendering, a technique for displaying three-dimensional functions such as computed tomography (CT) or magnetic resonance (MR) data. In the 1990's he worked on 3D laser scanning, culminating in the Digital Michelangelo Project, in which he and his students spent a year in Italy digitizing the statues of Michelangelo. In the 2000's he worked on computational photography and microscopy, including light field imaging as commercialized by Lytro and other companies. At Stanford he taught computer graphics and the science of art, and still teaches digital photography. Outside of academia, Levoy co-designed the Google book scanner, launched Google's Street View project, and currently leads a team in GoogleX that has worked on Project Glass and the Nexus 6 HDR+ mode. Awards: Charles Goodwin Sands Medal for best undergraduate thesis (1976), National Science Foundation Presidential Young Investigator (1991), ACM SIGGRAPH Computer Graphics Achievement Award (1996), ACM Fellow (2007).

Friday, January 23, 2015, 11:00AM



GDC 6.302

Relative Upper Confidence Bound for the K-Armed Dueling Bandit Problem

Shimon Whiteson   [homepage]

University of Amsterdam

In this talk, I will propose a new method for the K-armed dueling bandit problem, a variation on the regular K-armed bandit problem that offers only relative feedback about pairs of arms. Our approach extends the Upper Confidence Bound algorithm to the relative setting by using estimates of the pairwise probabilities to select a promising arm and applying Upper Confidence Bound with the winner as a benchmark. We prove a sharp finite-time regret bound of order O(K log T) on a very general class of dueling bandit problems that matches a lower bound proven by Yue et al. In addition, our empirical results using real data from an information retrieval application show that our method greatly outperforms the state of the art.
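A simplified sketch of the approach described above follows the abstract's recipe: optimistic pairwise estimates pick a champion arm, then a second application of the upper confidence bound picks its opponent. It is not the authors' implementation and omits their bookkeeping and tie-breaking details; the comparison oracle `duel` and the preference matrix are hypothetical stand-ins for the application's relative feedback.

```python
# Simplified Relative Upper Confidence Bound (RUCB) sketch; illustration only.
import math
import random

def rucb(duel, K, horizon, alpha=0.51):
    """wins[i][j] counts how often arm i has beaten arm j in a duel."""
    wins = [[0] * K for _ in range(K)]
    for t in range(1, horizon + 1):
        # Optimistic (upper confidence) estimates of the pairwise win probabilities.
        U = [[0.5] * K for _ in range(K)]
        for i in range(K):
            for j in range(K):
                if i == j:
                    continue
                n = wins[i][j] + wins[j][i]
                U[i][j] = 1.0 if n == 0 else (
                    wins[i][j] / n + math.sqrt(alpha * math.log(t) / n))
        # Champion: an arm that optimistically beats every rival at least half the time.
        candidates = [i for i in range(K)
                      if all(U[i][j] >= 0.5 for j in range(K) if j != i)]
        c = random.choice(candidates) if candidates else random.randrange(K)
        # Opponent: the rival with the best optimistic chance of beating the champion.
        d = max((j for j in range(K) if j != c), key=lambda j: U[j][c])
        winner, loser = (c, d) if duel(c, d) == c else (d, c)
        wins[winner][loser] += 1
    return wins

# Hypothetical preference matrix: P[i][j] is the probability that arm i beats arm j.
P = [[0.5, 0.6, 0.7],
     [0.4, 0.5, 0.6],
     [0.3, 0.4, 0.5]]
duel = lambda i, j: i if random.random() < P[i][j] else j
print(rucb(duel, K=3, horizon=2000))
```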

About the speaker:

Shimon Whiteson is an Associate Professor and the head of the Autonomous Agents section at the Informatics Institute of the University of Amsterdam. He was recently awarded an ERC Starting Grant from the European Research Council as well as a VIDI grant for mid-career researchers from the Dutch national science foundation. He received his PhD in 2007 from the University of Texas at Austin, under the supervision of Peter Stone. His research focuses on decision-theoretic planning and learning, with applications to robotics, multi-camera tracking systems, and information retrieval.

Friday, January 23, 2015, 2:00PM



GDC 6.302

Modern Exact and Approximate Combinatorial Optimization Algorithms for Graphical Models

Rina Dechter   [homepage]

University of California, Irvine

In this talk I will present several principles behind state-of-the-art algorithms for solving combinatorial optimization tasks defined over graphical models (Bayesian networks, Markov networks, constraint networks, satisfiability) and demonstrate their performance on some benchmarks.

Specifically, I will present branch and bound search algorithms that explore the AND/OR search space over graphical models and thus exploit the problem's decomposition (using AND nodes) and equivalence (by caching), and prune irrelevant subspaces via the power of bounding heuristics. In particular, I will show how two ideas, mini-bucket partitioning (which relaxes the input problem using node duplication only) and linear programming relaxations (which optimize cost-shifting/re-parameterization schemes), can be combined to yield tight bounding heuristics within systematic, anytime search.
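The mini-bucket relaxation mentioned above can be illustrated with a tiny example (a sketch only, not the speaker's solver): when eliminating a variable, splitting its bucket of functions into mini-buckets and maximizing over a separate copy of the variable in each one always yields an upper bound on the exact elimination, and such bounds can serve as admissible heuristics for branch and bound search. The factors and their values below are hypothetical.

```python
# Illustration of the mini-bucket upper bound via node (variable) duplication.

def exact_elimination(f, g, domain):
    """Exact max-elimination of x from the product f(x) * g(x)."""
    return max(f[x] * g[x] for x in domain)

def mini_bucket_bound(f, g, domain):
    """Relaxed elimination: each mini-bucket maximizes over its own copy of x,
    so the result can only be larger (an upper bound)."""
    return max(f[x] for x in domain) * max(g[x] for x in domain)

# Hypothetical factors over a single binary variable x.
f = {0: 0.9, 1: 0.4}
g = {0: 0.2, 1: 0.8}

exact = exact_elimination(f, g, domain=(0, 1))   # max(0.18, 0.32) = 0.32
bound = mini_bucket_bound(f, g, domain=(0, 1))   # 0.9 * 0.8 = 0.72
assert bound >= exact
print(exact, bound)
```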

Notably, a solver for finding the most probable explanation (MPE or MAP) that embeds these principles won first place in all time categories of the 2012 PASCAL2 approximate inference challenge, and first or second place in the UAI-2014 competitions. Recent work on parallel/distributed schemes and on m-best anytime solutions may be mentioned, as time permits.

Parts of this work were done jointly with Lars Otten, Alex Ihler, Radu Marinescu, Natasha Flerova, and Kalev Kask.

About the speaker:

Rina Dechter is a professor of Computer Science at the University of California, Irvine. She received her PhD in Computer Science from UCLA in 1985, an M.S. in Applied Mathematics from the Weizmann Institute, and a B.S. in Mathematics and Statistics from the Hebrew University of Jerusalem. Her research centers on computational aspects of automated reasoning and knowledge representation, including search, constraint processing, and probabilistic reasoning. Professor Dechter is the author of Constraint Processing (Morgan Kaufmann, 2003), has authored over 150 research papers, and has served on the editorial boards of Artificial Intelligence, the Constraints journal, the Journal of Artificial Intelligence Research, and the Journal of Machine Learning Research (JMLR). She was awarded the Presidential Young Investigator Award in 1991, has been a Fellow of the American Association for Artificial Intelligence since 1994, held a Radcliffe Fellowship in 2005-2006, received the 2007 Association for Constraint Programming (ACP) Research Excellence Award, and is a 2013 Fellow of the ACM. She has been Co-Editor-in-Chief of Artificial Intelligence since 2011.

[ FAI Archives ]

Fall 2013 - Spring 2014

Fall 2012 - Spring 2013

Fall 2011 - Spring 2012

Fall 2010 - Spring 2011

Fall 2009 - Spring 2010

Fall 2008 - Spring 2009

Fall 2007 - Spring 2008

Fall 2006 - Spring 2007

Fall 2005 - Spring 2006

Spring 2005

Fall 2004

Spring 2004

Fall 2003

Spring 2003

Fall 2002

Spring 2002

Fall 2001

Spring 2001

Fall 2000

Spring 2000