Forum for Artificial Intelligence


[ About FAI   |   Upcoming talks   |   Past talks ]

About FAI

The Forum for Artificial Intelligence meets every other week (or so) to discuss scientific, philosophical, and cultural issues in artificial intelligence. Both technical research topics and broader interdisciplinary aspects of AI are covered, and all are welcome to attend!

If you would like to be added to the FAI mailing list, or have any questions or comments, please send email to Nick Jong or Lily Mihalkova.

Upcoming talks

Friday, November 3
3:00pm, ACES 2.402
Matthew Campbell
University of Texas at Austin, Department of Mechanical Engineering
Search Methods for Finding Optimal Graph Topologies
Wednesday, November 29
2:00pm, ACES 2.402
Charles Ofria
Michigan State University
Title TBA

Friday, November 3, 3:00pm

Coffee at 2:45pm

ACES 2.402

Search Methods for Finding Optimal Graph Topologies

Dr. Matthew Campbell   [homepage]

Mechanical Engineering
University of Texas at Austin

This research offers a fundamentally new view of topology optimization. To date, topological synthesis approaches have simply been augmentations of existing stochastic optimization techniques. The generic approach defined here combines aspects of existing optimization techniques, graph theory, mathematical programming, artificial intelligence, and shape and graph grammars. Graph transformation research has existed for nearly 40 years in an esoteric corner of artificial intelligence, but only recently has the work been deemed useful in design automation, since the knowledge and heuristics of a particular problem domain can be encapsulated into rules. The talk presents various example problems that are in the process of being solved by these newly defined search methods.
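
As a minimal illustration of what rule-based topology search can look like in general (the rules, the objective, and every name below are assumptions for this sketch, not Dr. Campbell's implementation), candidate graphs can be grown by applying grammar-like transformation rules while a best-first search keeps the highest-scoring topologies:

    # A minimal sketch of grammar-rule-based topology search; the rules and the
    # evaluate() objective are illustrative assumptions, not Dr. Campbell's method.
    import heapq
    import itertools
    import networkx as nx

    def add_node_rule(g):
        """Attach a brand-new node to each existing node, one candidate per attachment."""
        for n in list(g.nodes):
            h = g.copy()
            h.add_edge(n, max(g.nodes) + 1)   # assumes integer node labels
            yield h

    def add_edge_rule(g):
        """Add one currently missing edge between existing nodes."""
        for u, v in itertools.combinations(g.nodes, 2):
            if not g.has_edge(u, v):
                h = g.copy()
                h.add_edge(u, v)
                yield h

    RULES = [add_node_rule, add_edge_rule]    # domain knowledge would live in these rules

    def evaluate(g):
        """Placeholder objective: reward clustering, penalize edge count."""
        return nx.average_clustering(g) - 0.05 * g.number_of_edges()

    def best_first_search(seed, budget=200, beam=10):
        counter = itertools.count()           # tie-breaker so graphs are never compared directly
        frontier = [(-evaluate(seed), next(counter), seed)]
        best = seed
        for _ in range(budget):
            if not frontier:
                break
            neg_score, _, g = heapq.heappop(frontier)
            if -neg_score > evaluate(best):
                best = g
            for rule in RULES:
                for child in rule(g):
                    heapq.heappush(frontier, (-evaluate(child), next(counter), child))
            frontier = heapq.nsmallest(beam, frontier)   # prune to the most promising candidates
            heapq.heapify(frontier)
        return best

    # Example: best_first_search(nx.path_graph(3)) grows a small seed graph into a denser topology.

The point of the sketch is only that the search operators, not the optimizer, are where domain knowledge is encoded as rules.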

About the speaker:

Dr. Campbell joined the Department of Mechanical Engineering at the University of Texas at Austin in 2000 and is currently an Associate Professor in the Manufacturing and Design area. His research focuses on computational methods that aid the engineering designer earlier in the design process than traditional optimization does. To date, he has been awarded $1.57 million in research funding, including an NSF CAREER award for research into a generic graph topology optimization method. This research represents a culmination of his past computational synthesis research, including the automatic design of sheet metal components, multi-stable MEMS devices, MEMS resonators, function structures, and electro-mechanical configurations. Dr. Campbell is a member of the AAAI, the AIAA, the Phi Kappa Phi Honor Society, the Pi Tau Sigma Mechanical Engineering Honorary Fraternity, the ASME, the ASEE, and the Design Society, and has received best paper awards at conferences sponsored by the latter three.

Wednesday, November 29, 2:00pm

Coffee at 1:45pm

ACES 2.402

Title TBA

Dr. Charles Ofria   [homepage]

Computer Science and Engineering
Michigan State University

Abstract TBA

About the speaker:

Bio TBA

Past talks

Thursday, July 13
1:00pm, ACES 2.402
William W. Cohen
Carnegie Mellon University
A Framework for Learning to Query Heterogeneous Data
Friday, Aug. 25
11:00am, ACES 6.304
Paul Bennett
Carnegie Mellon University
Building Reliable Metaclassifiers for Text Learning
Thursday, Aug. 31
3:30pm, ACES 2.302
Deb Roy
MIT
Meaning Machines
Thursday, Sep. 7
3:30pm, ACES 2.302
David Fogel
Natural Selection, Inc.
Behind the Scenes with Blondie24: Evolving Intelligence in Checkers and Chess

Thursday, July 13, 1:00pm

Coffee at 12:45pm

ACES 2.402

A Framework for Learning to Query Heterogeneous Data

Dr. William W. Cohen   [homepage]

Machine Learning Department
Carnegie Mellon University

[Sign-up schedule for individual meetings]

A long-term goal of research on data integration is to develop data models and query languages that make it easy to answer structured queries using heterogeneous data. In this talk, I will describe a very simple query language, based on typed similarity queries, which are answered based on a graph containing a heterogeneous mixture of textual and non-textual objects. The similarity metric proposed is based on a lazy graph walk, which can be approximated efficiently using methods related to particle filtering. Machine learning techniques can be used to improve this metric for specific tasks, often leading to performance far better than plausible task-specific baseline methods. We experimentally evaluate several classes of similarity queries from the domains of analysis of biomedical text and personal information management: for instance, in one set of experiments, a user's personal information is represented as a graph containing messages, calendar information, social network information, and a timeline, and similarity search is used to find people likely to attend a meeting.

This is joint work with Einat Minkov and Andrew Ng.
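
As a rough illustration of the kind of walk involved (an assumption-laden sketch, not the speaker's system), the snippet below runs a lazy random walk over a typed graph: with some probability the walker stays put, otherwise it spreads its probability over a node's neighbors, and after a few steps the nodes of the requested answer type are ranked by their probability mass.

    # An assumption-laden sketch of a lazy graph walk for typed similarity queries;
    # the data layout and parameter values are illustrative, not the speaker's system.
    from collections import defaultdict

    def lazy_graph_walk(edges, query_nodes, node_type, answer_type,
                        stay_prob=0.5, steps=10):
        """edges: dict mapping each node to its list of neighbors (a typed, heterogeneous graph).
        node_type: dict mapping each node to a type label such as 'person', 'email', or 'date'."""
        # Start with uniform probability over the query nodes.
        dist = {n: 1.0 / len(query_nodes) for n in query_nodes}
        for _ in range(steps):
            nxt = defaultdict(float)
            for node, p in dist.items():
                nxt[node] += stay_prob * p            # "lazy": stay put with some probability
                neighbors = edges.get(node, [])
                if neighbors:
                    share = (1.0 - stay_prob) * p / len(neighbors)
                    for nb in neighbors:
                        nxt[nb] += share              # otherwise spread mass over neighbors
            dist = dict(nxt)
        # Rank only nodes of the requested answer type, e.g. people likely to attend a meeting.
        ranked = [(n, p) for n, p in dist.items() if node_type.get(n) == answer_type]
        return sorted(ranked, key=lambda item: -item[1])

Learning, in this framing, amounts to re-weighting how much probability flows across different edge types for a given task.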

About the speaker:

William Cohen received his bachelor's degree in Computer Science from Duke University in 1984, and a PhD in Computer Science from Rutgers University in 1990. From 1990 to 2000 Dr. Cohen worked at AT&T Bell Labs and later AT&T Labs-Research, and from April 2000 to May 2002 Dr. Cohen worked at Whizbang Labs, a company specializing in extracting information from the web. Dr. Cohen is a member of the board of the International Machine Learning Society, and has served as an action editor for the Journal of Machine Learning Research, the journal Machine Learning, and the Journal of Artificial Intelligence Research. He co-organized the 1994 International Machine Learning Conference, is the co-Program Committee Chair for the 2006 International Machine Learning Conference, and has served on more than 20 program committees or advisory committees.

Dr. Cohen's research interests include information integration and machine learning, particularly information extraction, text categorization and learning from large datasets. He holds seven patents related to learning, discovery, information retrieval, and data integration, and is the author of more than 100 publications.

Friday, Aug. 25, 11:00am

Coffee at 10:45am

ACES 6.304

Building Reliable Metaclassifiers for Text Learning

Dr. Paul Bennett   [homepage]

Computer Science Department
Carnegie Mellon University

[Sign-up schedule for individual meetings]

Appropriately combining information sources is a broad topic that has been researched in many forms. It includes sensor fusion, distributed data-mining, regression combination, classifier combination, and even the basic classification problem. After all, the hypothesis a classifier emits is just a specification of how the information in the basic features should be combined. This talk addresses one subfield of this domain: leveraging locality when combining classifiers for text classification.

After discussing and introducing improved methods for recalibrating classifiers, we define local reliability, dependence, and variance and discuss the roles they play in classifier combination. Using these insights, we motivate a series of reliability-indicator variables which intuitively abstract the input domain to capture the local context related to a classifier's reliability.

We then present our main methodology, STRIVE, which employs a metaclassification approach to learn an improved model that varies the combination rule by considering the local reliability of the base classifiers via the indicators. The resulting models empirically outperform state-of-the-art metaclassification approaches that do not use locality. Next, we analyze the contributions of the various reliability indicators to the combination model and suggest informative features to consider when redesigning the base classifiers. Finally, we show how inductive transfer methods can be extended to increase the amount of labeled training data available for learning a combination model by collapsing data traditionally viewed as coming from different learning tasks.
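
As a rough sketch of the general stacking idea described above (the feature choices, model choices, and names are assumptions for illustration, not the STRIVE system), a metaclassifier can be trained on the base classifiers' out-of-fold probability outputs together with simple reliability-indicator features:

    # A rough sketch of stacking with reliability indicators; feature and model
    # choices are assumptions for illustration, not the STRIVE implementation.
    import numpy as np
    from sklearn.calibration import CalibratedClassifierCV
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_predict
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.svm import LinearSVC

    def reliability_indicators(X_counts):
        """Toy locality features from a dense term-count matrix: length and vocabulary spread."""
        lengths = X_counts.sum(axis=1)
        unique_terms = (X_counts > 0).sum(axis=1)
        return np.column_stack([lengths, unique_terms])

    def train_metaclassifier(X_counts, y):
        base_models = [MultinomialNB(),
                       CalibratedClassifierCV(LinearSVC())]   # calibrated so it emits probabilities
        # Out-of-fold predictions so the metaclassifier sees honest base-classifier outputs.
        base_outputs = [cross_val_predict(m, X_counts, y, cv=5, method="predict_proba")[:, 1]
                        for m in base_models]
        meta_features = np.column_stack(base_outputs + [reliability_indicators(X_counts)])
        meta = LogisticRegression(max_iter=1000).fit(meta_features, y)
        fitted_bases = [m.fit(X_counts, y) for m in base_models]
        return fitted_bases, meta

The key design choice is that the combination rule is learned, and can therefore shift its trust among base classifiers depending on the local context the indicator features describe.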

About the speaker:

Paul Bennett is currently a Postdoctoral Fellow in the Language Technologies Institute at Carnegie Mellon University where he serves as Chief Learning Architect on the RADAR project. Paul's primary research interests are in text classification, information retrieval, ensemble methods, and calibration, with wider interests in statistical learning and applications of artificial intelligence in adaptive systems in general. His published work includes research on classifier combination, action-item detection, calibration, inductive transfer, machine translation, and recommender systems. Paul received his Ph.D. (2006) from the Computer Science Department at Carnegie Mellon University.

Thursday, Aug. 31, 3:30pm

Coffee at 3:15pm

ACES 2.302, Avaya Auditorium

Meaning Machines

Dr. Deb Roy   [homepage]

MIT Media Laboratory
MIT

[Sign-up schedule for individual meetings]

People use words to refer to the world as a means for influencing the beliefs and actions of others. Although many isolated aspects of the structure and use of language have been extensively studied, a unified model of situated language use remains unexplored. Any attempt to explain unconstrained adult language use appears futile due to the overwhelming complexity of the physical, cognitive, and cultural factors at play. A strategy for making progress towards a holistic account of language use is to study simple forms of language (e.g., conversational speech about objects and events in the here-and-now in limited social contexts) and strive for "vertically integrated" computational models. I will present experiments guided by this strategy in building conversational robots and natural language interfaces for video games. An emerging framework suggests a semiotic perspective may be useful for designing systems that process language grounded in social and physical context.

About the speaker:

Deb Roy is Associate Professor of Media Arts and Sciences at the Massachusetts Institute of Technology. He is Director of the Cognitive Machines Group at the MIT Media Laboratory, which he founded in 2000. Roy also directs the 10x research program, a lab-wide effort to design new technologies for enhancing human cognitive and physical capabilities. Roy has published numerous peer-reviewed papers in the areas of knowledge representation, speech and language processing, machine perception, robotics, information retrieval, cognitive modeling, and human-machine interaction, and has served as guest editor of the journal Artificial Intelligence. He has lectured widely in academia and industry. His work has been featured in various popular press venues including the New York Times, the Globe and Mail, CNN, BBC, and PBS. In 2003 Roy was appointed AT&T Career Development Professor. He holds a B.A.Sc. in Computer Engineering from the University of Waterloo, and a Ph.D. in Media Arts and Sciences from MIT.

Thursday, Sep. 7, 3:30pm

Coffee at 3:15pm

ACES 2.302, Avaya Auditorium

Behind the Scenes with Blondie24: Evolving Intelligence in Checkers and Chess

Dr. David Fogel   [homepage]

Chief Executive Officer
Natural Selection, Inc.

[Sign-up schedule for individual meetings]

Blondie24 is a self-learning checkers program that taught itself to play at the level of human experts. Starting with only rudimentary information about the location, number, and types of checkers pieces on the board, Blondie24 learned to play well enough to be ranked in the top 500 of 120,000 checkers players registered at Microsoft's zone.com. The program uses a simple evolutionary algorithm to optimize neural networks as board evaluators. Any sophisticated features used to interpret the positions of pieces were invented within the neural network. Furthermore, the evolving neural networks were not told whether they won, lost, or drew any specific game; instead, the only feedback they received was a point score associated with the overall result of playing a random number of games. In so doing, the line of research addressed two fundamental issues raised by Arthur Samuel and Allen Newell over three decades ago: can a computer invent features in checkers, and can a computer learn how to play without receiving explicit credit assignment? A similar process has also been applied to chess (Blondie25). Starting with an open-source program rated about 1800 (Class A), the evolved program has demonstrated grandmaster-level performance. The lecture will provide motivation and technical details for this research, as well as offer material not found in any technical or book treatment of the development. Attendees will be able to challenge Blondie to a game, if they like.
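
As a toy sketch of the evolutionary setup described above (all parameters, names, and the placeholder game loop are assumptions for illustration, not the Blondie24 code), a population of neural-network board evaluators can be mutated each generation and selected purely on an aggregate point score over a handful of games:

    # A toy sketch of evolving neural-network evaluators; parameters, names, and the
    # placeholder game loop are assumptions, not the Blondie24 implementation.
    import numpy as np

    rng = np.random.default_rng(0)
    BOARD_FEATURES = 32          # one input per playable checkers square (an assumption)
    HIDDEN = 40

    def random_net():
        return {"W1": rng.normal(0, 0.2, (HIDDEN, BOARD_FEATURES)),
                "W2": rng.normal(0, 0.2, (1, HIDDEN))}

    def evaluate_board(net, board_vec):
        """Score a board position in [-1, 1] with a tiny feed-forward network."""
        hidden = np.tanh(net["W1"] @ board_vec)
        return np.tanh(net["W2"] @ hidden).item()

    def mutate(net, sigma=0.05):
        return {name: w + rng.normal(0, sigma, w.shape) for name, w in net.items()}

    def total_points(net, opponent, n_games):
        """Placeholder for actual play (+1 win, 0 draw, -2 loss per game); a real system
        would drive a game-tree search that calls evaluate_board at the leaf positions."""
        return int(rng.integers(-2 * n_games, n_games + 1))

    def evolve(pop_size=15, generations=100):
        population = [random_net() for _ in range(pop_size)]
        for _ in range(generations):
            candidates = population + [mutate(p) for p in population]
            opponents = rng.choice(len(candidates), size=5, replace=False)
            # Selection sees only each candidate's aggregate score over several games,
            # never which individual games were won, lost, or drawn.
            scores = [sum(total_points(c, candidates[j], 1) for j in opponents)
                      for c in candidates]
            keep = np.argsort(scores)[::-1][:pop_size]
            population = [candidates[i] for i in keep]
        return population[0]

The sketch's only purpose is to show where the credit-assignment question bites: the selection step never learns which features mattered or which games were won, only the aggregate score.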

About the speaker:

Dr. David Fogel is chief executive officer of Natural Selection, Inc. in La Jolla, California. He is the author of more than 200 technical publications and six books, including Blondie24: Playing at the Edge of AI (Morgan Kaufmann, 2002) and How to Solve It: Modern Heuristics (with Zbigniew Michalewicz, 2nd ed., Springer, 2005, translated into Chinese and Polish). Among many leadership roles, Dr. Fogel was the founding editor-in-chief of the IEEE Transactions on Evolutionary Computation (1996-2002), was general chairman for the 2002 IEEE World Congress on Computational Intelligence, and will chair the upcoming 2007 IEEE Symposium Series in Computational Intelligence, to be held April 1-5, 2007 in Honolulu, Hawaii. He was elected a Fellow of the IEEE in 1999, received the 2004 IEEE Kiyo Tomiyasu Technical Field Award, and was elected president-elect of the IEEE Computational Intelligence Society for 2007.

Past Schedules

Fall 2005 - Spring 2006

Spring 2005

Fall 2004

Spring 2004

Fall 2003

Spring 2003

Fall 2002

Spring 2002

Fall 2001

Spring 2001

Fall 2000

Spring 2000