Forum for Artificial Intelligence



[ About FAI   |   Upcoming talks   |   Past talks ]



The Forum for Artificial Intelligence meets every other week (or so) to discuss scientific, philosophical, and cultural issues in artificial intelligence. Both technical research topics and broader inter-disciplinary aspects of AI are covered, and all are welcome to attend!

If you would like to be added to the FAI mailing list, subscribe here. If you have any questions or comments, please send email to Karl Pichotta or Craig Corcoran.

The current schedule is also available as a Google Calendar or alternatively in iCal format.



[ Upcoming talks ]



[ Past talks ]

Mon, August 23, 11:00AM, ACES 2.402
Pat Langley (Arizona State University): Combining Data-Intensive and Knowledge-Guided Methods to Discover Interpretable Scientific Models

Fri, September 10, 11:00AM, ACES 2.302
Luke Zettlemoyer (University of Washington): Learning to Follow Orders: Reinforcement Learning for Mapping Instructions to Actions

Thu, September 30, 2:00PM, ACES 2.402
Philip Resnik (University of Maryland, College Park): Translation as Collaboration

Fri, October 8, 11:00AM, ACES 2.402
Ted Pedersen (University of Minnesota, Duluth): The Effect of Different Context Representations on Word Sense Discrimination in Biomedical Texts

Thu, October 21, 1:00PM, ACES 2.402
L. Venkata Subramaniam (IBM Research India): Real World Text Analytics

Fri, October 22, 11:00AM, ACES 2.402
Andrew McCallum (University of Massachusetts Amherst): Probabilistic Programming and Probabilistic Databases with Imperatively-defined Factor Graphs

Fri, November 5, 11:00AM, PAI 3.14
Michael Bowling (University of Alberta): AI After Dark: Computers Playing Poker

Wed, November 10, 11:00AM, ACES 2.402
Csaba Szepesvari (University of Alberta): How to choose cakes (if you must?) -- advice from statistics

Fri, November 19, 11:00AM, PAI 3.14
Gopal Gupta (UT Dallas): Logic, Co-induction and Infinite Computations

Fri, December 3, 11:00AM, ACES 2.302
Vittorio Ferrari (ETH Zurich): Visual Learning with Generic Knowledge

Fri, January 21, 11:00AM, ACES 2.302
Edwin Olson (University of Michigan): Winning the MAGIC 2010 Autonomous Robotics Competition

Fri, January 28, 2:00PM, ACES 2.402
Marc Deisenroth (University of Washington): Probabilistic Inference and Learning for Control

Fri, February 11, 11:00AM, ACES 2.402
Serge Belongie (University of California, San Diego): Visual Recognition with Humans in the Loop

Fri, February 11, 2:00PM, ACES 2.402
Bonnie Webber (University of Edinburgh): Discourse Modelling -- Past, Present and Future

Mon, February 14, 4:00PM, ACES 2.302
James Fan (IBM Research): Viewing Party for the Jeopardy! Challenge

Tue, February 15, 4:20PM, ACES 2.402
James Fan (IBM Research): Viewing Party for the Jeopardy! Challenge

Wed, February 16, 4:20PM, ACES 2.302
James Fan (IBM Research): Viewing Party for the Jeopardy! Challenge

Tue, February 22, 11:00AM, ACES 2.402
Rob Holte (University of Alberta): Improving Predictions of IDA*'s Performance by Ignoring Information

Fri, March 4, 11:00AM, ACES 2.402
Ellen Riloff (University of Utah): Adventures in Bootstrapping: Acquiring Lexical Knowledge for NLP

Fri, March 11, 11:00AM, ACES 2.402
James Rehg (Georgia Institute of Technology): Temporal Causality and the Analysis of Interactions in Video

Fri, March 25, 11:00AM, ACES 2.402
Chitta Baral (Arizona State University): Translating English to KR languages using inverse lambda and parameter learning

Mon, March 28, 1:00PM, ACES 2.402
Paul N. Bennett (Microsoft Research): Class-Based Contextualized Search

Fri, April 1, 11:00AM, ACES 2.402
Michal Pechoucek (Czech Technical University in Prague): Towards scalable, high-fidelity and mixed multi-agent simulation of manned/unmanned air traffic

Fri, April 8, 11:00AM, ACES 2.402
Charles Isbell (Georgia Tech): Adaptive Drama Management: Bringing Machine Learning to Interactive Entertainment

Fri, April 15, 2:00PM, JGB 2.218 (Note unusual time and place)
Rob Fergus (New York University): Deconvolutional Networks

Fri, April 29, 11:00AM, ACES 2.402
K. Brent Venable (University of Padova, Italy): Compact preference models in single- and multi-agent settings

Tue, May 3, 11:00AM, ACES 2.402
Pedro Domingos (University of Washington): Unifying Logic and Probability: A Progress Report

Monday, August 23, 2010, 11:00AM



ACES 2.402

Combining Data-Intensive and Knowledge-Guided Methods to Discover Interpretable Scientific Models

Pat Langley   [homepage]

Arizona State University

Early research in e-science emphasized the use of computers to represent and simulate models that reflected scientists' knowledge about situations of interest, but these models often made little contact with data. Recent work in e-science has utilized machine learning and data mining to uncover regularities in observations, but the results make few connections to scientists' knowledge. In this talk, I present an approach known as inductive process modeling that combines these two traditions in a synergistic way. The paradigm encodes scientific models as sets of processes that incorporate differential equations, simulates these models' behavior over time, induces the models from time-series data, and uses background knowledge to constrain search through the model space. The resulting models are interpretable, in that they use concepts and notations familiar to scientists, but they are also accurate, in that they match observations. Although inductive process modeling is a general approach to scientific knowledge discovery, I illustrate its operation in the context of ecology and environmental science, fields for which it seems especially appropriate. After describing the basic technique, I report a number of extensions that increase the accuracy of induced models and efficiency at finding them. I also describe progress on an interactive software environment for the construction, evaluation, and revision of such interpretable scientific models. In closing, I discuss intellectual influences on the work and directions for future research.

This talk describes joint work at Stanford University and ISLE with Kevin Arrigo, Stuart Borrett, Matt Bravo, Will Bridewell, and Ljupco Todorovski under funding from the National Science Foundation. The URL http://www.isle.org/process/ includes a list of publications on inductive process modeling.

About the speaker:

Dr. Pat Langley serves as Director of the Institute for the Study of Learning and Expertise, Consulting Professor of Symbolic Systems at Stanford University, and Head of the Computational Learning Laboratory at Stanford's Center for the Study of Language and Information. He has contributed to the fields of artificial intelligence and cognitive science for over 25 years, having published 200 papers and five books on these topics, including the text Elements of Machine Learning. Professor Langley is considered a co-founder of the field of machine learning, where he championed both experimental studies of learning algorithms and their application to real-world problems before either was popular and before the phrase 'data mining' became widespread.

Dr. Langley is a AAAI Fellow, he was founding Executive Editor of the journal Machine Learning, and he was Program Chair for the Seventeenth International Conference on Machine Learning. His research has dealt with learning in planning, reasoning, language, vision, robotics, and scientific knowledge discovery, and he has contributed novel methods to a variety of paradigms, including logical, probabilistic, and case-based learning. His current research focuses on methods for constructing explanatory process models in scientific domains and on integrated architectures for intelligent physical agents.

Friday, September 10, 2010, 11:00AM



ACES 2.302

Learning to Follow Orders: Reinforcement Learning for Mapping Instructions to Actions

Luke Zettlemoyer   [homepage]

University of Washington

In this talk, I will address the problem of relating linguistic analysis and control --- specifically, mapping natural language instructions to executable actions. I will present a reinforcement learning algorithm for inducing these mappings by interacting with virtual computer environments and observing the outcome of the executed actions. This technique has enabled automation of tasks that until now have required human participation --- for example, automatically configuring software by consulting how-to guides. I will also describe a recent extension for learning to interpret high-level instructions, ones that posit goals without explicitly describing the actions required to achieve them. Our results demonstrate that in both cases, the method can rival supervised learning techniques while requiring few or no annotated training examples.

This is joint work with Branavan, Harr Chen and Regina Barzilay. The talk will focus on work published at ACL 2009, where it received a Best Paper Award, and ACL 2010.

About the speaker:

Luke Zettlemoyer is an Assistant Professor at the University of Washington in Seattle. He recently completed a postdoctoral research fellowship at the University of Edinburgh and, before that, received his Ph.D. from MIT. His research interests are in the intersections of natural language processing, machine learning and decision making under uncertainty.

Thursday, September 30, 2010, 2:00PM



ACES 2.402

Translation as Collaboration

Philip Resnik   [homepage]

University of Maryland, College Park

Although machine translation has made a great deal of recent progress, fully automatic high quality translation remains far out of reach for the vast majority of the world's languages. A variety of projects are now using crowdsourcing to tap into Web-based communities of people who are willing to help in the translation process, but bilingual expertise is quite sparse compared to the availability of monolingual volunteers. In this talk, I'll discuss a new approach to the problem: taking advantage of monolingual human expertise in tandem with automatic translation. Early empirical results suggest that this collaborative approach, combining monolingual crowdsourcing with MT, may cover significant new ground on the path toward high availability, high quality, cost-effective translation. This is joint work with Ben Bederson, Olivia Buzek, Chang Hu, Yakov Kronrod, and Alex Quinn.

About the speaker:

Philip Resnik is a professor at the University of Maryland, with joint appointments in the Department of Linguistics and at the Institute for Advanced Computer Studies. He received his Ph.D. in Computer and Information Science at the University of Pennsylvania in 1993, and has held research positions at Bolt Beranek and Newman, IBM TJ Watson Research Center, and Sun Microsystems Laboratories. His research interests include the combination of knowledge-based and statistical methods in NLP, machine translation, and computational social science.

Friday, October 8, 2010, 11:00AM



ACES 2.402

The Effect of Different Context Representations on Word Sense Discrimination in Biomedical Texts

Ted Pedersen   [homepage]

University of Minnesota, Duluth

Unsupervised word sense discrimination relies on the idea that words that occur in similar contexts will have similar meanings. These techniques cluster multiple contexts in which an ambiguous word occurs, and the number of clusters discovered indicates the number of senses in which the ambiguous word is used. One important distinction among these methods is the underlying means of representing the contexts to be clustered. In this talk I will compare the efficacy of first-order methods that directly represent the features that occur in a context with several second-order methods that use a more indirect representation. I will show that second-order methods that use word by word co-occurrence matrices result in the highest accuracy and most robust word sense discrimination. These experiments were conducted with the freely available open-source software package SenseClusters, using experimental data drawn from MedLine abstracts. I will also briefly introduce UMLS::Similarity, a freely available open-source software package that measures the similarity and relatedness of concepts found in the Unified Medical Language System (UMLS). I will show how measures from this package can be used to predict the degree of difficulty in word sense discrimination experiments, and can be used to perform word sense disambiguation.
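The first-order/second-order distinction can be illustrated with a small sketch (my own toy example, not SenseClusters; the mini-corpus, window size, and function names are invented): a second-order method represents a context by averaging the word-by-word co-occurrence vectors of the words it contains, so two contexts can look similar even when they share no words directly.

```python
from collections import defaultdict

def cooccurrence_matrix(sentences, window=2):
    """Word-by-word co-occurrence counts within a +/- window."""
    counts = defaultdict(lambda: defaultdict(int))
    for words in sentences:
        for i, w in enumerate(words):
            for j in range(max(0, i - window), min(len(words), i + window + 1)):
                if i != j:
                    counts[w][words[j]] += 1
    return counts

def second_order_vector(context, counts, vocab):
    """Average the co-occurrence vectors of the words in the context."""
    vec = [0.0] * len(vocab)
    for w in context:
        for k, term in enumerate(vocab):
            vec[k] += counts[w][term]
    n = max(len(context), 1)
    return [v / n for v in vec]

corpus = [["cold", "virus", "infected", "the", "cell"],
          ["the", "cold", "weather", "froze", "the", "lake"]]
counts = cooccurrence_matrix(corpus)
vocab = sorted({w for s in corpus for w in s})
v = second_order_vector(["virus", "cell"], counts, vocab)
```

Clustering these averaged vectors (rather than raw feature-occurrence vectors) is the kind of indirect representation the talk compares.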

About the speaker:

Ted Pedersen (Ph.D., Southern Methodist University, 1998) is a Professor in the Department of Computer Science at the University of Minnesota, Duluth. His research interests revolve around determining the meaning of words and phrases in context, and include word sense discrimination and disambiguation, measuring semantic similarity and relatedness among concepts, and identifying collocations in large corpora. His research is currently funded by the National Institutes of Health, and focuses on applying some of these ideas to the biomedical domain.

Dr. Pedersen has overseen a number of successful efforts to develop and deploy open-source software, including WordNet::Similarity, UMLS::Similarity, the Ngram Statistics Package, and SenseClusters. He currently serves as an elected member of the Executive Board of the North American Chapter of the Association for Computational Linguistics (NAACL) (2008-2010). He is the recipient of a National Science Foundation Faculty Early Career Development (CAREER) Award.

Thursday, October 21, 2010, 1:00PM



ACES 2.402

Real World Text Analytics

L. Venkata Subramaniam   [homepage]

IBM Research India

Noise is ubiquitous in real-world text documents. Text produced by processing signals intended for human use is often noisy for automated computer processing; techniques like Automatic Speech Recognition, Optical Character Recognition and Machine Translation introduce processing noise. Similarly, digital text produced in informal settings such as online chat, SMS, email, message boards, newsgroups, blogs, wikis and web pages contains considerable noise. In this talk we will present our work on dealing with real-world noisy text and extracting useful information from it.

About the speaker:

L. Venkata Subramaniam manages the information processing and analytics group at IBM Research India. He received his PhD from IIT Delhi in 1999. His research focuses on unstructured information management, statistical natural language processing, noisy text analytics, text and data mining, information theory, speech and image processing. His work on Data Cleansing and Entity Resolution has been deployed in the field in scenarios involving cleansing of millions of data records. He co-founded the AND (Analytics for Noisy Unstructured Text Data) workshop series and also co-chaired the first four workshops, 2007-2010. He was guest co-editor of two special issues on Noisy Text Analytics in the International Journal of Document Analysis and Recognition in 2007 and 2009.

Friday, October 22, 2010, 11:00AM



ACES 2.402

Probabilistic Programming and Probabilistic Databases with Imperatively-defined Factor Graphs

Andrew McCallum   [homepage]

University of Massachusetts Amherst

Practitioners in natural language processing, information integration, computer vision and other areas have achieved great empirical success using graphical models with repeated, relational structure. But as researchers explore increasingly complex structures, there has been growing interest in new programming languages or toolkits that make it easier to implement such models in a flexible, yet scalable way. A key issue in these toolkits is how to define the templates for this repeated structure and its tied parameters. Rather than using a declarative language, such as SQL or first-order logic, we advocate using an imperative language to express various aspects of model structure, inference, and learning. By combining the traditional, declarative statistical semantics of factor graphs with imperative definitions of their construction and operation, we allow the user to mix declarative and procedural domain knowledge, and also gain significant efficiencies. We have implemented such imperatively defined factor graphs in a system we call FACTORIE, a software library for an object-oriented, strongly-typed, functional language called Scala. I will introduce FACTORIE, give several examples of its use, explain how it supports a new style of probabilistic databases, and describe its application to schema alignment and lightly-supervised extraction of FreeBase-defined relations from several years' worth of NYTimes articles.
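FACTORIE itself is a Scala library; as a language-neutral illustration of the underlying idea (ordinary imperative code constructs the factor graph, while the factors retain declarative scoring semantics), here is a minimal Python sketch. The variable names, domains, and scoring tables are invented for the example:

```python
class Variable:
    """A random variable with a finite domain and a current assignment."""
    def __init__(self, name, domain):
        self.name, self.domain, self.value = name, domain, domain[0]

class Factor:
    """A factor scores the current assignment of the variables it touches."""
    def __init__(self, variables, score_fn):
        self.variables, self.score_fn = variables, score_fn

    def score(self):
        return self.score_fn(*(v.value for v in self.variables))

def log_score(factors):
    """Unnormalized log-probability of the current joint assignment."""
    return sum(f.score() for f in factors)

# Imperative construction: ordinary code decides which factors exist.
a = Variable("weather", ["sunny", "rainy"])
b = Variable("activity", ["walk", "stay"])
factors = [
    Factor([a], lambda x: 0.5 if x == "sunny" else -0.5),
    Factor([a, b], lambda x, y: 1.0 if (x, y) == ("sunny", "walk") else 0.0),
]

a.value, b.value = "sunny", "walk"
best = log_score(factors)      # 0.5 + 1.0 = 1.5
a.value = "rainy"
worse = log_score(factors)     # -0.5 + 0.0 = -0.5
```

Inference and learning would then manipulate assignments through these same objects; the point is that the graph's structure is built by running code rather than by writing a declarative template language.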

About the speaker:

Andrew McCallum is a Professor and Director of the Information Extraction and Synthesis Laboratory in the Computer Science Department at University of Massachusetts Amherst. He has published over 150 papers in many areas of AI, including natural language processing, machine learning, data mining and reinforcement learning, and his work has received over 15,000 citations. He obtained his PhD from the University of Rochester in 1995 with Dana Ballard and held a postdoctoral fellowship at CMU with Tom Mitchell and Sebastian Thrun. In the early 2000's he was Vice President of Research and Development at WhizBang Labs, a 170-person start-up company that used machine learning for information extraction from the Web. He is a AAAI Fellow, the recipient of the UMass NSM Distinguished Research Award, the UMass Lilly Teaching Fellowship, and the IBM Faculty Partnership Award. He was the Program Co-chair for the International Conference on Machine Learning (ICML) 2008, and a member of the board of the International Machine Learning Society and the editorial board of the Journal of Machine Learning Research. For the past ten years, McCallum has been active in research on statistical machine learning applied to text, especially information extraction, co-reference, document classification, finite state models, semi-supervised learning, and social network analysis. Work on search and bibliometric analysis of open-access research literature can be found at rexa.info. McCallum's web page is www.cs.umass.edu/~mccallum.

Friday, November 5, 2010, 11:00AM



PAI 3.14

AI After Dark: Computers Playing Poker

Michael Bowling   [homepage]

University of Alberta

The game of poker presents a serious challenge for artificial intelligence. The game is essentially about dealing with many forms of uncertainty: unobservable opponent cards, undetermined future cards, and unknown opponent strategies. Coping with these uncertainties is critical to playing at a high-level. In July 2008, the University of Alberta's poker playing program, Polaris, became the first to defeat top professional players at any variant of poker in a meaningful competition. In this talk, I'll tell the story of this match interleaved with the science that enabled Polaris's accomplishment.

About the speaker:

Michael Bowling is an associate professor at the University of Alberta. He received his Ph.D. in 2003 from Carnegie Mellon University in the area of artificial intelligence. His research focuses on machine learning, game theory, and robotics, and he is particularly fascinated by the problem of how computers can learn to play games through experience.

Wednesday, November 10, 2010, 11:00AM



ACES 2.402

How to choose cakes (if you must?) -- advice from statistics

Csaba Szepesvari   [homepage]

University of Alberta

Every time you visit a new town you go to its best confectionery. Which cake to choose? Needless to say, cakes are made a little differently in every town. Should you choose your familiar favorite, or should you try a new one so that you are not missing something very good? How should you choose if there are a very large number of cakes, maybe more than the days of your life, or even infinitely many? Of course, this problem is an instance of the classic multi-armed bandit problem.

Applications range from project management and product pricing, through calibration of physical processes and monitoring and control of wireless networks, to optimizing website content. In this talk I will describe some recent results about when the space of options is very large or even infinite with more or less structure. I will outline several open problems with varying difficulty. The talk is based on joint work with (in chronological order) Peter Auer, Ronald Ortner (Graz, Austria), Yasin Abbasi-Yadkori (UofA), Sarah Filippi, Olivier Cappe, Aurélien Garivier (Telecom ParisTech, CNRS), and Pallavi Arora and Rong Zheng (University of Houston, TX).
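The cake dilemma is the exploration/exploitation trade-off in miniature. For the finite-armed case, a standard baseline is the UCB1 rule: play the arm maximizing empirical mean plus an exploration bonus that shrinks as the arm is sampled. A minimal sketch (the reward probabilities below are invented for illustration, and real bandit work uses far more careful variants):

```python
import math
import random

def ucb1(arm_means, horizon, rng):
    """Run UCB1 on Bernoulli arms; return how often each arm was pulled."""
    n = len(arm_means)
    counts = [0] * n
    totals = [0.0] * n

    def pull(i):
        counts[i] += 1
        totals[i] += 1.0 if rng.random() < arm_means[i] else 0.0  # Bernoulli reward

    for i in range(n):            # initialization: pull every arm once
        pull(i)
    for t in range(n, horizon):
        # empirical mean + sqrt(2 ln t / n_i) exploration bonus
        bonus = lambda i: math.sqrt(2.0 * math.log(t) / counts[i])
        pull(max(range(n), key=lambda i: totals[i] / counts[i] + bonus(i)))
    return counts

pulls = ucb1([0.9, 0.2], horizon=1000, rng=random.Random(0))
```

With well-separated means, the better arm quickly dominates the pull counts; the hard settings the talk addresses are exactly those where the arm space is too large (or infinite) for this per-arm bookkeeping to make sense.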

About the speaker:

Csaba Szepesvari received his PhD in 1999 from the University of Szeged, Hungary. He is currently an Associate Professor at the Department of Computing Science of the University of Alberta and a principal investigator of the Alberta Ingenuity Center for Machine Learning. Previously, he held a senior researcher position at the Computer and Automation Research Institute of the Hungarian Academy of Sciences, where he headed the Machine Learning Group. Before that, he spent 5 years in the software industry. In 1998, he became the Research Director of Mindmaker, Ltd., working on natural language processing and speech products, and from 2000, he was the Vice President of Research at the Silicon Valley company Mindmaker Inc. He is the coauthor of a book on nonlinear approximate adaptive controllers and a recent short book on reinforcement learning, and has published over 100 journal and conference papers. He serves as the Associate Editor of IEEE Transactions on Adaptive Control and AI Communications, is on the board of editors of the Journal of Machine Learning Research and the Machine Learning Journal, and is a regular member of the program committee at various machine learning and AI conferences. His areas of expertise include statistical learning theory, reinforcement learning and nonlinear adaptive control.

Friday, November 19, 2010, 11:00AM



PAI 3.14

Logic, Co-induction and Infinite Computations

Gopal Gupta   [homepage]

UT Dallas

Coinduction is a powerful technique for reasoning about unfounded sets, unbounded structures, infinite automata, and interactive computations. Where induction corresponds to least fixed point semantics, co-induction corresponds to greatest fixed point semantics. In this talk I will give a tutorial introduction to co-induction and show how co-induction can be elegantly incorporated into logic programming to obtain the co-inductive logic programming (co-LP) paradigm. I will also discuss how co-LP can be elegantly used for sophisticated applications that include (i) model checking and verification, including hybrid/cyber-physical systems and systems specified using the pi-calculus, and (ii) planning and goal-directed execution of answer set programs that perform non-monotonic reasoning.
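One small way to see the least/greatest fixed point contrast concretely (my own example, not from the talk): take a finite transition system and the monotone operator F(S) = "states with a successor in S". The least fixed point is what can be justified inductively from a base case (here, nothing), while the greatest fixed point is what can be consistently assumed forever, i.e. the states admitting an infinite run:

```python
def pre(succ, S):
    """F(S): states with at least one successor inside S (a monotone operator)."""
    return {s for s, ts in succ.items() if any(t in S for t in ts)}

def fixpoint(f, start):
    """Iterate f from a starting set until it stabilizes."""
    S = start
    while True:
        T = f(S)
        if T == S:
            return S
        S = T

succ = {0: [1], 1: [0], 2: []}          # 0 <-> 1 form a cycle; 2 is a dead end
f = lambda S: pre(succ, S)

inductive = fixpoint(f, set())          # least fixed point: empty set
coinductive = fixpoint(f, {0, 1, 2})    # greatest fixed point: {0, 1}
```

Inductively, no state can be proved to take a step (there is no base case), so the least fixed point is empty; coinductively, the cycle through 0 and 1 is self-justifying, which is exactly the kind of infinite behavior co-LP is designed to reason about.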

About the speaker:

Gopal Gupta received his MS and Ph.D. in computer science from the University of North Carolina at Chapel Hill in 1987 and 1991 respectively, and his B. Tech. in Computer Science from IIT Kanpur in 1985. Currently he is a Professor and Head of Computer Science at the University of Texas at Dallas. His research interests are in logic programming, programming languages semantics/implementation, and assistive technology. He has published extensively in these areas. He serves as an area editor of the journal Theory and Practice of Logic Programming. He has served in the program committees of numerous conferences, and since January 2010, has served as the President of the Association for Logic Programming. His work on logic programming has been the basis of two startup companies.

Friday, December 3, 2010, 11:00AM



ACES 2.302

Visual Learning with Generic Knowledge

Vittorio Ferrari   [homepage]

ETH Zurich

The dream of computer vision is a machine capable of interpreting images of complex scenes. Central to this goal is the ability to recognize objects as belonging to classes and to localize them in the images. In the traditional paradigm, each class is learned starting from scratch, typically from training images where the location of objects is manually annotated (fully supervised setting). In this talk, I will present a scenario where knowledge generic over classes is first learned from images of various classes with given object locations and then employed to support learning any new class without location annotation. Generic knowledge provides a strong basis which facilitates learning in this weakly supervised setting. This strategy enables learning from challenging images containing extensive clutter and large scale and appearance variations between object instances. In turn, this opens the door to learning a large number of classes with little manual labelling effort.

About the speaker:

Vittorio Ferrari is an Assistant Professor at the Swiss Federal Institute of Technology Zurich (ETHZ). After receiving his PhD from ETHZ in 2004, he was a post-doctoral researcher at INRIA Grenoble and the University of Oxford. His research interests are in visual learning, with emphasis on reducing the amount of manual effort needed to learn visual concepts.

Friday, January 21, 2011, 11:00AM



ACES 2.302

Winning the MAGIC 2010 Autonomous Robotics Competition

Edwin Olson   [homepage]

University of Michigan

The MAGIC 2010 competition asked teams of robots to collaboratively perform reconnaissance missions in a 250,000 m^2 urban indoor/outdoor environment: explore the area, build a map, and recognize interesting objects -- with as little human intervention as possible. In this talk, I'll describe how our team of 14 robots won the competition and $750,000. Central challenges included inaccurate dead-reckoning information, limited communications, moving and lethal obstacles, and a scale of operation (environment size and number of robots) that pushed existing mapping algorithms to failure. Key to our system's success was our ability to maintain a consistent coordinate frame and the ability to elicit critically useful information from the human operators while simultaneously minimizing their workload. I'll describe the components of our system in addition to illustrating our team's performance, including some of the failure modes that motivate our ongoing work.

About the speaker:

Edwin Olson is an assistant professor at the University of Michigan with research interests in robot autonomy, perception, and learning. In 2010, he led Team Michigan to first place in the MAGIC 2010 robotics competition. He received his PhD, M.Eng., and B.S. from MIT, where he was also a core member of MIT's DARPA Urban Challenge team.

Friday, January 28, 2011, 2:00PM



ACES 2.402

Probabilistic Inference and Learning for Control

Marc Deisenroth   [homepage]

University of Washington

We propose PILCO, a data-efficient and fully probabilistic model-based framework for autonomously learning transition dynamics and controllers in the absence of expert knowledge. In most autonomous learning scenarios either task-specific domain knowledge and/or many trials are required to learn a task. In practical applications, however, full knowledge about the underlying dynamics or thousands of trials might be unavailable/impractical. PILCO learns a probabilistic dynamics model from data only. By representing and incorporating model uncertainty into the decision making process, PILCO reduces model bias and fully automatically learns to solve fairly complicated control problems in only a few trials. Across multiple complicated control tasks, PILCO achieves an unprecedented degree of automation and an unprecedented speed of learning.

About the speaker:

Marc Peter Deisenroth received a Dr.-Ing. degree from the Karlsruhe Institute of Technology in 2009. Since 2010 he has been a postdoctoral researcher in the Robotics and State Estimation Lab, University of Washington, Seattle. He is an adjunct researcher at Intel Labs Seattle and the CBL Lab, University of Cambridge (UK). From 2006 to 2009, he was a researcher at the Max Planck Institute for Biological Cybernetics in Tuebingen (Germany) and at the University of Cambridge (UK). His research focuses on Bayesian inference, machine learning, robotics, and control.

Friday, February 11, 2011, 11:00AM



ACES 2.402

Visual Recognition with Humans in the Loop

Serge Belongie   [homepage]

University of California, San Diego

We present an interactive, hybrid human-computer method for object classification. The method applies to classes of problems that are difficult for most people, but are recognizable by people with the appropriate expertise (e.g., animal species or airplane model recognition). The classification method can be seen as a visual version of the 20 questions game, where questions based on simple visual attributes are posed interactively. The goal is to identify the true class while minimizing the number of questions asked, using the visual content of the image. Incorporating user input drives up recognition accuracy to levels that are good enough for practical applications; at the same time, computer vision reduces the amount of human interaction required. The resulting hybrid system is able to handle difficult, large multi-class problems with tightly-related categories. We introduce a general framework for incorporating almost any off-the-shelf multi-class object recognition algorithm into the visual 20 questions game, and provide methodologies to account for imperfect user responses and unreliable computer vision algorithms. We evaluate the accuracy and computational properties of different computer vision algorithms and the effects of noisy user responses on a dataset of 200 bird species and on the Animals With Attributes dataset. Our results demonstrate the effectiveness and practicality of the hybrid human-computer classification paradigm.
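A toy version of the question-selection idea (my own sketch, not the paper's system; the classes, attributes, and probabilities are invented): maintain a posterior over classes, ask the attribute question whose answer is expected to reduce entropy most, and update the posterior by Bayes' rule. In the real system the prior over classes would come from the computer vision algorithm and user answers would be modeled as noisy.

```python
import math

# P(attribute answer is "yes" | class), for invented classes and attributes.
likelihood = {
    "sparrow": {"red_breast": 0.10, "large": 0.05},
    "robin":   {"red_breast": 0.90, "large": 0.10},
    "eagle":   {"red_breast": 0.05, "large": 0.95},
}
classes = list(likelihood)

def entropy(p):
    return -sum(pi * math.log(pi) for pi in p.values() if pi > 0)

def update(prior, attr, answer):
    """Bayes update of the class posterior after a yes/no answer."""
    post = {c: prior[c] * (likelihood[c][attr] if answer else 1 - likelihood[c][attr])
            for c in classes}
    z = sum(post.values())
    return {c: v / z for c, v in post.items()}

def best_question(prior, attrs):
    """Pick the attribute minimizing expected posterior entropy."""
    def expected_entropy(attr):
        p_yes = sum(prior[c] * likelihood[c][attr] for c in classes)
        return (p_yes * entropy(update(prior, attr, True))
                + (1 - p_yes) * entropy(update(prior, attr, False)))
    return min(attrs, key=expected_entropy)

prior = {c: 1 / 3 for c in classes}
q = best_question(prior, ["red_breast", "large"])
posterior = update(prior, q, True)
```

Here "large" splits the three classes more evenly than "red_breast", so it is asked first; a "yes" then concentrates the posterior on the one large class, mirroring how each answered question prunes the space of 200 bird species.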

This work is part of the Visipedia project, in collaboration with Steve Branson, Catherine Wah, Florian Schroff, Boris Babenko, Peter Welinder and Pietro Perona.

About the speaker:

Serge Belongie received the B.S. degree (with honor) in Electrical Engineering from the California Institute of Technology in 1995 and the M.S. and Ph.D. degrees in Electrical Engineering and Computer Sciences (EECS) at U.C. Berkeley in 1997 and 2000, respectively. While at Berkeley, his research was supported by a National Science Foundation Graduate Research Fellowship. He is also a co-founder of Digital Persona, Inc., and the principal architect of the Digital Persona fingerprint recognition algorithm. He is currently an associate professor in the Computer Science and Engineering Department at U.C. San Diego. His research interests include computer vision and pattern recognition. He is a recipient of the NSF CAREER Award and the Alfred P. Sloan Research Fellowship. In 2004 MIT Technology Review named him to the list of the 100 top young technology innovators in the world (TR100).

Friday, February 11, 2011, 2:00PM



ACES 2.402

Discourse Modelling -- Past, Present and Future

Bonnie Webber   [homepage]

University of Edinburgh

I will look back through nearly forty years of computational research on discourse modelling, starting with simple early computational models based on regular expressions and context-free grammars, moving on to some recent successes in the development of algorithms and resources for discourse modelling that assume a less monolithic, more data-informed view of discourse, and concluding with some challenges that face us as we try to make effective use of discourse modelling in language technology.

About the speaker:

Bonnie Webber is a Professor in the School of Informatics, Edinburgh University. She is best known for work on Question Answering (starting with LUNAR in the early 70's) and discourse phenomena (starting with her PhD thesis on discourse anaphora). She has also carried out research on animation from instructions, medical decision support systems and biomedical text processing.

Monday, February 14, 2011, 4:00PM



ACES 2.302

Viewing Party for the Jeopardy! Challenge

James Fan (2/14)   [homepage]

IBM Research

Watson is an automated question-answering system built by IBM for the Jeopardy! Challenge. Watson will compete against the two best human players in the history of the game, Ken Jennings and Brad Rutter. The competition will be televised over three days - February 14, 15 and 16, 4:30-5:00 pm - and you're invited to join us.

On the first day, February 14, the Viewing Party begins at 4:00 in Avaya Auditorium, ACES 2.302. We'll start with an IBM-produced video that introduces Watson and the challenges it faces in natural-language processing, question answering and automated inference. At 4:30, we'll watch the first round of the Jeopardy! Challenge. At 5:00, we'll have an open discussion session led by Dr. James Fan, one of the core members of the Watson development team, and a PhD alum of our department.

On February 15, the Viewing Party will be held in ACES 2.402. On February 16, the Party returns to ACES 2.302. On both days, the event runs 4:20-5:00, with time afterward for discussion.

About the speaker:

James Fan is a research staff member at IBM Research. His research interests include question answering, knowledge representation and reasoning, natural language processing and machine learning. James is currently working on the DeepQA project which is advancing the state-of-the-art in automatic, open domain question answering technology. The DeepQA team is pushing question answering technology to levels of performance previously unseen and demonstrating the technology by playing Jeopardy! at the level of a human champion. Prior to joining IBM in 2006, James received his PhD at the University of Texas at Austin and did his dissertation on the topic of interpreting loosely encoded questions.

Tuesday, February 15, 2011, 4:20PM



ACES 2.402

Viewing Party for the Jeopardy! Challenge

James Fan (2/15)   [homepage]

IBM Research

Watson is an automated question-answering system built by IBM for the Jeopardy! Challenge. Watson will compete against the two best human players in the history of the game, Ken Jennings and Brad Rutter. The competition will be televised over three days - February 14, 15 and 16, 4:30-5:00 pm - and you're invited to join us.

On the first day, February 14, the Viewing Party begins at 4:00 in Avaya Auditorium, ACES 2.302. We'll start with an IBM-produced video that introduces Watson and the challenges it faces in natural-language processing, question answering and automated inference. At 4:30, we'll watch the first round of the Jeopardy! Challenge. At 5:00, we'll have an open discussion session led by Dr. James Fan, one of the core members of the Watson development team, and a PhD alum of our department.

On February 15, the Viewing Party will be held in ACES 2.402. On February 16, the Party returns to ACES 2.302. On both days, the event runs 4:20-5:00, with time afterward for discussion.

About the speaker:

James Fan is a research staff member at IBM Research. His research interests include question answering, knowledge representation and reasoning, natural language processing and machine learning. James is currently working on the DeepQA project which is advancing the state-of-the-art in automatic, open domain question answering technology. The DeepQA team is pushing question answering technology to levels of performance previously unseen and demonstrating the technology by playing Jeopardy! at the level of a human champion. Prior to joining IBM in 2006, James received his PhD at the University of Texas at Austin and did his dissertation on the topic of interpreting loosely encoded questions.

Wednesday, February 16, 2011, 4:20PM



ACES 2.302

Viewing Party for the Jeopardy! Challenge

James Fan (2/16)   [homepage]

IBM Research

Watson is an automated question-answering system built by IBM for the Jeopardy! Challenge. Watson will compete against the two best human players in the history of the game, Ken Jennings and Brad Rutter. The competition will be televised over three days - February 14, 15 and 16, 4:30-5:00 pm - and you're invited to join us.

On the first day, February 14, the Viewing Party begins at 4:00 in Avaya Auditorium, ACES 2.302. We'll start with an IBM-produced video that introduces Watson and the challenges it faces in natural-language processing, question answering and automated inference. At 4:30, we'll watch the first round of the Jeopardy! Challenge. At 5:00, we'll have an open discussion session led by Dr. James Fan, one of the core members of the Watson development team, and a PhD alum of our department.

On February 15, the Viewing Party will be held in ACES 2.402. On February 16, the Party returns to ACES 2.302. On both days, the event runs 4:20-5:00, with time afterward for discussion.

About the speaker:

James Fan is a research staff member at IBM Research. His research interests include question answering, knowledge representation and reasoning, natural language processing and machine learning. James is currently working on the DeepQA project which is advancing the state-of-the-art in automatic, open domain question answering technology. The DeepQA team is pushing question answering technology to levels of performance previously unseen and demonstrating the technology by playing Jeopardy! at the level of a human champion. Prior to joining IBM in 2006, James received his PhD at the University of Texas at Austin and did his dissertation on the topic of interpreting loosely encoded questions.

Tuesday, February 22, 2011, 11:00AM



ACES 2.402

Improving Predictions of IDA*'s Performance by Ignoring Information

Rob Holte   [homepage]

University of Alberta

In 1998 Korf and Reid launched a line of research aimed at creating practical methods for predicting exactly how many nodes the search algorithm IDA* would expand on an iteration with a specific depth-bound given a particular heuristic function. Zahavi, Felner, Burch, and Holte recently generalized the Korf and Reid work. The work presented in this talk represents the next advance in this line of research. Our main contribution is to identify a source of prediction error that had hitherto been overlooked. We call it the "discretization effect". Our second contribution is to disprove the intuitively appealing idea, specifically asserted to be true by Zahavi et al., that a "more informed" prediction system cannot make worse predictions than a "less informed" system. The possibility of this statement being false arises immediately from knowledge of the discretization effect, since a more informed system is likely to be more susceptible to the discretization effect than a less informed system. In many of our experiments, the more informed system makes poorer predictions. Our final contribution is a method for counteracting the discretization effect, which we call "epsilon-truncation". One way to view "epsilon-truncation" is that it makes a prediction system less informed, in a carefully chosen way, so as to improve its predictions by avoiding the discretization effect. Experimental results show that epsilon-truncation substantially improves prediction accuracy for a variety of domains and heuristics.

About the speaker:

Dr. Robert Holte is a professor in the Computing Science Department and Vice Dean of the Faculty of Science at the University of Alberta. He is a well-known member of the international machine learning research community, former editor-in-chief of a leading international journal in this field ("Machine Learning"), and past director of the Alberta Ingenuity Centre for Machine Learning (AICML). His main scientific contributions are his seminal works on the performance of very simple classification rules and a technique ("cost curves") for cost-sensitive evaluation of classifiers. In addition to machine learning he undertakes research in single-agent search (pathfinding), in particular, the use of automatic abstraction techniques to speed up search.

Friday, March 4, 2011, 11:00AM



ACES 2.402

Adventures in Bootstrapping: Acquiring Lexical Knowledge for NLP

Ellen Riloff   [homepage]

University of Utah

Understanding natural language requires many types of lexical knowledge. Some lexical resources have been created (e.g., WordNet and FrameNet), but they are far from complete and they are rarely sufficient for informal jargon or specialized domains. Starting in 1997, the Utah NLP lab has been developing bootstrapping techniques to automatically acquire lexical knowledge from unannotated text collections. We have created several bootstrapping algorithms to induce semantic lexicons, as well as resources for subjectivity classification, event extraction, and plot unit analysis. Most recently, we used bootstrapping to create a contextual semantic tagger, given only seed words and domain-specific texts for training. In this talk, we will overview the bootstrapping methods that we have developed and try to distill out general lessons we have learned about what it takes to make bootstrapping work.
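The bootstrapping loop the abstract describes can be sketched in miniature: start from seed words, score candidate words by how strongly their contexts overlap the current lexicon's contexts, and promote the best candidates each round. The tiny corpus, the bag-of-words "context", and the overlap score below are illustrative assumptions, not the Utah lab's actual systems.

```python
# Toy bootstrapping lexicon induction: grow a "disease" lexicon from one seed.
corpus = [
    "flu is a disease", "malaria is a disease", "malaria spreads fast",
    "cholera spreads fast", "cholera is deadly", "paris is a city",
]

def contexts(word):
    """Bag of co-occurring tokens (a crude context representation)."""
    return {tok for sent in corpus if word in sent.split()
            for tok in sent.split() if tok != word}

def bootstrap(seeds, rounds=2, per_round=1):
    lexicon = set(seeds)
    for _ in range(rounds):
        # Pool the contexts of everything accepted so far.
        lex_ctx = set().union(*(contexts(w) for w in lexicon))
        candidates = {tok for sent in corpus for tok in sent.split()} - lexicon
        # Rank candidates by context overlap with the lexicon, keep the best.
        scored = sorted(candidates,
                        key=lambda c: len(contexts(c) & lex_ctx),
                        reverse=True)
        lexicon.update(scored[:per_round])
    return lexicon

lexicon = bootstrap({"flu"})   # picks up "malaria", then "cholera"
```

The same skeleton underlies most semantic-lexicon bootstrappers; the hard part, as the talk notes, is choosing context representations and scoring functions that keep the lexicon from drifting off-category.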

About the speaker:

Ellen Riloff is an Associate Professor of Computer Science in the School of Computing at the University of Utah. Her primary research areas are information extraction, semantic tagging and lexicon induction, coreference resolution, and subjectivity analysis. A major emphasis of her research has been automatically acquiring knowledge needed for natural language processing using bootstrapping methods that can learn from unannotated texts with minimal human supervision. Her professional roles have included service on the NAACL Executive Board, Human Language Technology (HLT) Advisory Board, DARPA/NSF Question Answering Roadmap Committee, Computational Linguistics Editorial Board, and as CoNLL Program Co-Chair and Faculty Advisor for the ACL Student Research Workshop.

Friday, March 11, 2011, 11:00AM



ACES 2.402

Temporal Causality and the Analysis of Interactions in Video

James Rehg   [homepage]

Georgia Institute of Technology

A basic goal of video understanding is the organization of video data into sets of events with associated temporal dependencies. For example, a soccer goal could be explained using a vocabulary of events such as passing, dribbling, tackling, etc. In describing the dependencies between events it is natural to invoke the concept of causality, but previous attempts to perform causal reasoning in video analysis have been limited to special cases, such as sporting events or naïve physics, where strong domain models are available. In this talk I will describe a novel, data-driven approach to the analysis of causality in video. The key to our approach is the representation of low-level visual events as the output of a multivariate point process, and the use of a nonparametric formulation of temporal causality to group event data into interacting subsets. This grouping process differs from standard motion segmentation methods in that it exploits the temporal structure in video over extended time scales. Our method is particularly well-suited to the analysis of social interactions, as it provides a means to organize sensor data and expose patterns of back-and-forth interaction. I will present results for categorizing and retrieving social games between parents and children from unstructured video collections. This application is part of a larger effort in using sensing, machine learning, and AI technologies to support the detection, treatment, and understanding of developmental disorders such as autism. I will present a brief overview of these activities, which are supported by a 2010 Expeditions in Computing Award from the National Science Foundation. This is joint work with Karthir Prabhakar, Sangmin Oh, Ping Wang, and Gregory Abowd.

About the speaker:

James M. Rehg (pronounced "ray") is a Professor in the School of Interactive Computing at the Georgia Institute of Technology, where he is the Director of the Center for Behavior Imaging, co-Director of the Computational Perception Lab, and Associate Director of Research in the Center for Robotics and Intelligent Machines. He received his Ph.D. from CMU in 1995 and worked at the Cambridge Research Lab of DEC (and then Compaq) from 1995-2001, where he managed the computer vision research group. He received the National Science Foundation (NSF) CAREER award in 2001, and the Raytheon Faculty Fellowship from Georgia Tech in 2005. He and his students have received a number of best paper awards, including best student paper awards at ICML 2005 and BMVC 2010. Dr. Rehg is active in the organizing committees of the major conferences in computer vision, most recently serving as the General co-Chair for IEEE CVPR 2009. He has served on the Editorial Board of the International Journal of Computer Vision since 2004. He has authored more than 100 peer-reviewed scientific papers and holds 23 issued US patents. Dr. Rehg is currently leading a multi-institution effort to develop the science and technology of Behavior Imaging, funded by an NSF Expedition award (see www.cbs.gatech.edu for details).

Friday, March 25, 2011, 11:00AM



ACES 2.402

Translating English to KR languages using inverse lambda and parameter learning

Chitta Baral   [homepage]

Arizona State University

Our long term goal is to develop general methodologies to translate natural language text into a formal knowledge representation (KR) language. Our approach is inspired by Montague’s path-breaking thesis (1970) of viewing English as a formal language, and by research in natural language semantics. Our approach is based on PCCG (Probabilistic Combinatorial Categorial Grammars), λ-calculus and statistical learning of parameters. In an initial work, we start with an initial vocabulary consisting of λ-calculus representations of a small set of words and a training corpus of sentences and their representation in a KR language. We develop a learning based system that learns the λ-calculus representation of words from this corpus and generalizes it to words of the same category. The key and novel aspect in this learning is the development of Inverse Lambda algorithms which, when given λ-expressions β and γ, can come up with an α such that the application of α to β (or of β to α) gives γ. We augment this with learning of weights associated with multiple meanings of words. Our current system produces improved results on standard corpora on natural language interfaces for robot command and control and database queries. In an ongoing work we are able to use patterns to make guesses regarding the initial vocabulary. This, together with learning of parameters, allows us to develop a fully automated (without any initial vocabulary) way to translate English to designated KR languages. Our overall system is a good example of integration of results from multiple sub-fields of AI and computer science: machine learning, knowledge representation, natural language processing, λ-calculus (functional programming) and ontologies.
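The inverse-lambda idea admits a very small illustration: given a target expression γ and a known sub-expression β, recover an α with α(β) = γ. The string-based representation and naive substitution below are a deliberately crude sketch of the idea, not the paper's Inverse Lambda algorithms, which operate on typed λ-terms.

```python
# Toy symbolic sketch of the inverse-lambda direction: from gamma and beta,
# abstract out beta to guess an alpha such that alpha applied to beta
# reproduces gamma. Representation: alpha = (bound_variable, body_string).

def apply_lam(alpha, beta):
    """Beta-reduce (lambda var. body) applied to beta by naive substitution."""
    var, body = alpha
    return body.replace(var, beta)

def inverse_left(gamma, beta, var="x"):
    """Guess alpha with apply_lam(alpha, beta) == gamma by abstracting
    every occurrence of beta out of gamma."""
    return (var, gamma.replace(beta, var))

gamma = "state(texas)"              # KR-language meaning of "Texas is a state"
beta = "texas"                      # known meaning of the word "texas"
alpha = inverse_left(gamma, beta)   # learned meaning of the remaining words
assert apply_lam(alpha, beta) == gamma
```

Here the learner would record λx.state(x) as a candidate meaning for the phrase surrounding "texas" and, as the abstract describes, generalize it to other words of the same category, with weights disambiguating multiple candidate meanings.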

About the speaker:

Chitta Baral is a professor at the Arizona State University. He obtained his B.Tech(Hons) degree from the Indian Institute of Technology, Kharagpur in 1987 and his M.S and Ph.D degrees from the University of Maryland at College Park in 1990 and 1991 respectively. Chitta's research interests are in the areas of Artificial Intelligence, Knowledge Representation, Cognitive Robotics, Logic Programming, Natural Language processing and application of all that to Molecular Biology. His research has been supported over the years by National Science Foundation, NASA, Science Foundation Arizona, United Space Alliance, ONR, and ARDA/DTO/IARPA. He received the NSF CAREER award in 1995. He authored the book ``Knowledge Representation, Reasoning, and Declarative Problem Solving'' published by Cambridge University Press. He was an associate editor of the Journal of AI Research and is an area editor of the ACM Transactions on Computational Logic. His recent research focus is on temporal specification of goals, reasoning about actions and change in the multi-agent domain, combining probabilistic and logical representation and reasoning, and most recently, on natural language understanding through a learning based approach of translating natural language to knowledge representation languages.

Monday, March 28, 2011, 1:00PM



ACES 2.402

Class-Based Contextualized Search

Paul N. Bennett   [homepage]

Microsoft Research

Information retrieval has made significant progress in returning relevant results for a single query. However, much search activity is conducted within a richer context: a current task focus, recent search activities, and longer-term preferences. For example, our ability to accurately interpret the current query can be informed by knowledge of the web pages a searcher was viewing when initiating the search, or by recent actions of the searcher such as queries issued, results clicked, and pages viewed. We develop a framework based on classification that enables representation of a broad variety of context, including the searcher's long-term interests, recent activity, and current focus, as a class intent distribution. We then demonstrate how that can be used to improve the quality of search results. In order to make such an approach feasible, we need reasonably accurate classification into a taxonomy, a method of extracting and representing a user's query and context as a distribution over classes, and a method of using this distribution to improve the retrieval of relevant results. We describe recent work to address each of these challenges. This talk presents joint work with Nam Nguyen, Krysta Svore, Susan Dumais, and Ryen White.
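The "class intent distribution" idea can be made concrete with a toy reranker: blend the query's class distribution with the context's, then order results by how well each result's class distribution matches the blended intent. The mixing weight, cosine scoring, and example data below are illustrative assumptions, not the talk's actual model.

```python
import math

def mix(query_dist, context_dist, w=0.7):
    """Blend query and context class distributions into one intent."""
    classes = set(query_dist) | set(context_dist)
    d = {c: w * query_dist.get(c, 0.0) + (1 - w) * context_dist.get(c, 0.0)
         for c in classes}
    z = sum(d.values())
    return {c: v / z for c, v in d.items()}

def cosine(p, q):
    """Cosine similarity between two sparse class distributions."""
    dot = sum(p.get(c, 0.0) * q.get(c, 0.0) for c in set(p) | set(q))
    np_ = math.sqrt(sum(v * v for v in p.values()))
    nq = math.sqrt(sum(v * v for v in q.values()))
    return dot / (np_ * nq) if np_ and nq else 0.0

query_dist = {"Sports": 0.8, "Travel": 0.2}     # ambiguous query, e.g. "giants"
context_dist = {"Travel": 0.9, "Sports": 0.1}   # but the context still says Sports less
intent = mix(query_dist, context_dist)

results = {"redwood-parks.example": {"Travel": 1.0},
           "sf-giants.example": {"Sports": 1.0}}
ranked = sorted(results, key=lambda r: cosine(intent, results[r]), reverse=True)
```

With w = 0.7 the query dominates, so the Sports result still ranks first; lowering w would let recent Travel-heavy browsing flip the ranking, which is exactly the kind of trade-off such a framework has to tune.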

About the speaker:

Paul Bennett is a Researcher in the Context, Learning & User Experience for Search (CLUES) group at Microsoft Research where he works on using machine learning technology to improve information access and retrieval. His recent research has focused on classification-enhanced information retrieval, pairwise preferences, human computation, and text classification while his previous work focused primarily on ensemble methods, active learning, and obtaining reliable probability estimates, but also extended to machine translation, recommender systems, and knowledge bases. He completed his dissertation on combining text classifiers using reliability indicators in 2006 at Carnegie Mellon where he was advised by Profs. Jaime Carbonell and John Lafferty.

Friday, April 1, 2011, 11:00AM



ACES 2.402

Towards scalable, high-fidelity and mixed multi-agent simulation of manned/unmanned air traffic

Michal Pechoucek   [homepage]

Czech Technical University in Prague

Multi-agent systems and agent technologies enable simulation, testing and deployment of a wide range of air traffic control and management methods. In Agent Technology Center, we study the free-flight concept as a modern approach to flexible, collision-free air traffic control. Results of our research have been validated on AGENTFLY, a multi-agent simulation system. AGENTFLY is a software prototype of a versatile modeling environment that allows high-fidelity simulation experiments on the collective flight of unmanned aerial assets, as well as an environment for testing different methods for collision detection and collision resolution in civilian traffic. AGENTFLY has been developed in cooperation with the US Air Force and US Army and is further extended in cooperation with the FAA ATC. AGENTFLY is also currently being deployed on the PROCERUS UAS platform. During my talk I will state the problem of free-flight oriented, collision-free air traffic planning, introduce the designed planning and collision avoidance methods, and discuss the system's scalability with respect to NAS modeling as well as deployment on physical hardware platforms.

About the speaker:

Michal Pechoucek is a full professor in computer science at the Czech Technical University (CTU). He is the deputy Head of the Department of Cybernetics at CTU and the head of the Agent Technology Center at CTU. The research interests of Michal Pechoucek lie mainly in the fields of multi-agent simulation and modeling, coordination, social knowledge representation, multi-agent planning, multi-agent prototypes and test-beds, and applications of agent-based computing to security-related applications, UAV robotic coordination and air-traffic control. Michal Pechoucek has been a PI on more than 25 research contracts and grants provided by the US Air Force, CERDEC US Army, and the Office of Naval Research. He ran two research contracts provided by the FAA and cooperated on two additional research grants funded by NASA. His research has also been funded by industry, including Rockwell Automation, FOXCON, Denso AUTOMOTIVE, CADANCE Design Systems and others. He received the Czech Mind for Invention award in 2010, a Google research award in 2009, and a Czech Engineering Academy Award in 2007. He was the AAMAS Industry track chair in 2005 and an AAMAS SPC member in 2008 and 2010. He is an honorary member of the Artificial Intelligence Application Institute at the University of Edinburgh and a member of the advisory board of the Center for Advanced Information Technology, University of Binghamton. He is a Fulbright fellowship recipient, which funded his visiting professorship at the University of Southern California. He is co-founder of Cognitive Security and AgentFly Technologies, two Czech start-up companies. He is an advisory board member of FL3XX, an Austrian start-up company.

Friday, April 8, 2011, 11:00AM



ACES 2.402

Adaptive Drama Management: Bringing Machine Learning to Interactive Entertainment

Charles Isbell   [homepage]

Georgia Tech

In recent years, there has been a growing interest in constructing rich interactive entertainment and training experiences. As these experiences have grown in complexity, there has been a corresponding growing need for the development of robust technologies to shape and modify those experiences in reaction to the actions of human participants.

When thinking about how machine learning and artificial intelligence could help, one notes that the traditional goal of AI games---to win the game---is not particularly useful; rather, the goal is to make the human player's play experience better while being consistent with the goals of the author.

In this talk, I will present our technical efforts to achieve this goal by using machine learning as a way to allow designers to specify problems in broad strokes while letting a machine do further fine-tuning. In particular, I discuss (1) Targeted Trajectory Distribution Markov Decision Processes (TTD-MDPs), an extension of MDPs that provides variety of experience during repeated execution, and (2) computational influence, an automated way of operationalizing theories of influence and persuasion from social psychology to help guide players without decreasing their feelings of autonomy. I also describe our evaluation of these techniques with both simulations and an interactive storytelling system with human subjects.

About the speaker:

Dr. Charles Lee Isbell, Jr. received his BS in computer science in 1990 from the Georgia Institute of Technology and his PhD in 1998 from the Massachusetts Institute of Technology. After four years at AT&T Labs, he returned to Georgia Tech as faculty at the College of Computing. Charles' research interests are varied, but recently he has been building autonomous agents that engage in life-long learning in the presence of thousands of other intelligent agents, including humans. His work has been featured in the popular media, including The New York Times and the Washington Post, as well as in technical collections, where he has won two best paper awards in this area. Charles also pursues reform in CS education. He was a developer of Threads, Georgia Tech's new structuring principle for computing curricula. Recently, he has become the Associate Dean of Academic Affairs for the College of Computing.

Friday, April 15, 2011, 2:00PM



JGB 2.218 (Note unusual time and place)

Deconvolutional Networks

Rob Fergus   [homepage]

New York University

We present a hierarchical model that learns image decompositions via alternating layers of convolutional sparse coding and max pooling. When trained on natural images, the layers of our model capture image information in a variety of forms: low-level edges, mid-level edge junctions, high-level object parts and complete objects. To build our model we rely on a novel inference scheme that ensures each layer reconstructs the input, rather than just the output of the layer directly beneath, as is common with existing hierarchical approaches. This makes it possible to learn multiple layers of representation and we show models with 4 layers, trained on images from the Caltech-101 and 256 datasets. Features extracted from these models, in combination with a standard classifier, outperform SIFT and representations from other feature learning approaches.

About the speaker:

Rob Fergus is currently an Assistant Professor of Computer Science at the Courant Institute of Mathematical Sciences, New York University. He obtained an MSc with Prof. Pietro Perona at Caltech, before completing a PhD with Prof. Andrew Zisserman at the University of Oxford. Before coming to NYU, he spent two years as a post-doc in the Computer Science and Artificial Intelligence Lab (CSAIL) at MIT, working with Prof. William Freeman.

Friday, April 29, 2011, 11:00AM



ACES 2.402

Compact preference models in single- and multi-agent settings

K. Brent Venable   [homepage]

University of Padova (Italy)

As preferences are fundamental for the analysis of human choice behavior, they are becoming increasingly important for computational fields such as artificial intelligence (AI). Their embedding in intelligent systems calls for representation models that are both expressive and compact, as well as for efficient reasoning machinery. In this talk we will give an overview of soft constraints and CP-nets: two of the most successful AI compact preference frameworks currently used to represent the preferences of a single agent. We will first discuss and compare their positive and negative features, and then show how such models can be embedded in multi-agent settings, such as decision making via voting and stable matching problems.
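CP-nets, one of the two frameworks the talk surveys, can be illustrated in a few lines: each variable carries a preference order that may depend on its parents' values, and for an acyclic network the best outcome is found by a forward "sweep" from roots to leaves. The two-variable dinner example below is a standard textbook illustration, not from the talk itself.

```python
# Toy CP-net: "main" has an unconditional preference order;
# the preferred "wine" depends on the chosen main course.
pref_main = ["fish", "meat"]                 # fish preferred to meat
pref_wine = {"fish": ["white", "red"],       # given fish: white > red
             "meat": ["red", "white"]}       # given meat: red > white

def best_outcome():
    """Forward sweep: fix each variable to its most-preferred value
    given the values already chosen for its parents."""
    main = pref_main[0]
    wine = pref_wine[main][0]
    return {"main": main, "wine": wine}

assert best_outcome() == {"main": "fish", "wine": "white"}
```

The compactness comes from the conditional tables: n variables need only one small table each, rather than an explicit ordering over all exponentially many outcomes.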

About the speaker:

K. Brent Venable is currently an assistant professor at the Dept. of Pure and Applied Mathematics at the University of Padova (Italy). Her main research interests lie within artificial intelligence and concern, in particular, constraints, preferences, temporal reasoning and computational social choice. She is author or co-author of over 60 technical papers. She is involved in a lively international scientific exchange and, among others, is currently collaborating with researchers from NASA Ames, SRI International, NICTA-UNSW (Australia), University of Amsterdam (The Netherlands), 4C (Ireland) and Ben-Gurion University (Israel).

Tuesday, May 3, 2011, 11:00AM



ACES 2.402

Unifying Logic and Probability: A Progress Report

Pedro Domingos   [homepage]

University of Washington

Intelligent agents must be able to handle the complexity and uncertainty of the real world. First-order logic is a good representation for complexity, and probabilistic graphical models for uncertainty. Unifying the two is a long-standing goal of AI. This talk samples the state of the art in this area, in four parts: representation, inference, learning, and applications. First, I will introduce Markov logic, a language that combines logic and probability by attaching weights to first-order formulas and viewing them as templates for features of Markov random fields. Second, I will describe probabilistic theorem proving, an inference procedure that combines theorem proving and graphical model inference. Third, we look at statistical relational learning, with a focus on learning the structure and weights of Markov logic networks. Fourth, we apply these techniques to problems in natural language processing, including coreference resolution and semantic parsing. (Joint work with Jesse Davis, Vibhav Gogate, Stanley Kok, Daniel Lowd, Aniruddh Nath, Hoifung Poon, Matt Richardson, Parag Singla, Marc Sumner, and Jue Wang.)
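The core scoring rule of Markov logic is simple enough to enumerate by hand on a toy domain: a possible world's unnormalized weight is exp(Σᵢ wᵢ·nᵢ), where nᵢ counts the true groundings of weighted formula i in that world. The two-person domain and the single Smokes(x) ⇒ Cancer(x) formula below are the standard illustrative example; the brute-force enumeration is obviously only feasible for tiny domains.

```python
import math
from itertools import product

people = ["anna", "bob"]

def n_smokes_implies_cancer(world):
    """Count true groundings of Smokes(x) => Cancer(x) in a world,
    where a world maps ground atoms like ("Smokes", "anna") to booleans."""
    return sum(1 for p in people
               if (not world[("Smokes", p)]) or world[("Cancer", p)])

def score(world, w=1.5):
    """Unnormalized weight exp(w * n) of a possible world."""
    return math.exp(w * n_smokes_implies_cancer(world))

# Enumerate all 2^4 worlds over the four ground atoms and normalize.
atoms = [("Smokes", p) for p in people] + [("Cancer", p) for p in people]
worlds = [dict(zip(atoms, vals)) for vals in product([False, True], repeat=4)]
Z = sum(score(wld) for wld in worlds)
probs = {tuple(sorted(wld.items())): score(wld) / Z for wld in worlds}
# Worlds satisfying more groundings of the weighted formula get more mass;
# none are impossible, since a finite weight only penalizes violations.
```

This is exactly the "formulas as feature templates" view: each first-order formula expands into one feature per grounding, all sharing a single weight.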

About the speaker:

Pedro Domingos is Associate Professor of Computer Science and Engineering at the University of Washington. His research interests are in artificial intelligence, machine learning and data mining. He received a PhD in Information and Computer Science from the University of California at Irvine, and is the author or co-author of over 150 technical publications. He is a member of the editorial board of the Machine Learning journal, co-founder of the International Machine Learning Society, and past associate editor of JAIR. He was program co-chair of KDD-2003 and SRL-2009, and has served on numerous program committees. He is a AAAI Fellow, and has received several awards, including a Sloan Fellowship, an NSF CAREER Award, a Fulbright Scholarship, an IBM Faculty Award, and best paper awards at KDD-98, KDD-99, PKDD-05 and EMNLP-09.

[ FAI Archives ]

Fall 2013 - Spring 2014

Fall 2012 - Spring 2013

Fall 2011 - Spring 2012

Fall 2010 - Spring 2011

Fall 2009 - Spring 2010

Fall 2008 - Spring 2009

Fall 2007 - Spring 2008

Fall 2006 - Spring 2007

Fall 2005 - Spring 2006

Spring 2005

Fall 2004

Spring 2004

Fall 2003

Spring 2003

Fall 2002

Spring 2002

Fall 2001

Spring 2001

Fall 2000

Spring 2000