Forum for Artificial Intelligence

Archive



[ Home   |   About FAI   |   Upcoming talks   |   Past talks ]



This website is the archive for past Forum for Artificial Intelligence talks. Please click this link to navigate to the list of current talks.

FAI meets every other week (or so) to discuss scientific, philosophical, and cultural issues in artificial intelligence. Both technical research topics and broader inter-disciplinary aspects of AI are covered, and all are welcome to attend!

If you would like to be added to the FAI mailing list, subscribe here. If you have any questions or comments, please send email to Catherine Andersson.






[ Upcoming talks ]





Wed, October 2, 2013, 2:00PM: Tara Estlin (NASA Jet Propulsion Laboratory), "AI Technology in Space"
Thu, October 3, 2013, 4:00PM: Daniel Dvorak (NASA Jet Propulsion Laboratory), "MDS: A State/Model-Based Control Architecture for Robotic Systems"
Fri, October 4, 2013, 11:00AM: David Chiang (Information Sciences Institute at USC), "Learning Syntax and Semantics for Natural Language Translation"
Thu, October 17, 2013, 11:00AM: Milind Tambe (University of Southern California), "Security and Game Theory: Key Algorithmic Principles, Deployed Applications, Research Challenges"
Fri, November 22, 2013, 11:00AM: Kevin Murphy (Google Inc.), "From Big Data to Big Knowledge"
Wed, January 15, 2014, 11:00AM: Esra Erdem (Sabanci University), "A Formal Hybrid Framework for Robotic Manipulation"
Fri, January 31, 2014, 11:00AM: Sanjeev J. Koppal (Texas Instruments), "Lithographic Vision Sensors"
Fri, February 28, 2014, 11:00AM: CANCELED - Barbara Grosz (Harvard University), "Health Care Coordination and a Multi-Agent Systems 'Turing Challenge'"
Fri, March 21, 2014, 2:00PM: Joyce Chai (Michigan State University), "Bridging the Gaps of Perception and Action towards Situated Human-Robot Dialogue"
Thu, April 3, 2014, 3:00PM: Carla Brodley (Tufts University), "Class-Level Constraint-Based Clustering and its Application to Remote Sensing, AAAI 2014 Keywords and Multiple Sclerosis"
Fri, April 4, 2014, 11:00AM: Svetlana Lazebnik (University of Illinois at Urbana-Champaign), "Broad-Coverage Scene Parsing with Object Instances and Occlusion Ordering"
Tue, April 8, 2014, 2:00PM: Rob Holte (University of Alberta), "Towards a High-Performance Rapid Prototyping Tool for State-Space Search"
Thu, April 10, 2014, 9:30AM: Jim Bednar (University of Edinburgh School of Informatics), "A mechanistic model of the development and function of the primary visual cortex"
Wed, April 16, 2014, 3:00PM: Charles Ofria (Michigan State University), "The Evolution of Division of Labor and Specialization in Digital Organisms"
Fri, April 18, 2014, 11:00AM: Aaron Hertzmann (Adobe Research), "Computational Graphic Design: Colors, Fonts, and Layout"
Tue, April 22, 2014, 11:00AM: Bruno Castro da Silva (University of Massachusetts Amherst), "Learning Parameterized Motor Skills"
Wed, April 30, 2014, 11:00AM: Ian Davidson (UC Davis) and Jieping Ye (Arizona State University), "Advances in Active and Transfer Learning"
Fri, May 2, 2014, 11:00AM: Colin Bannard (UT Austin), "Imitation vs innovation in children's language learning"
Tue, June 3, 2014, 11:00AM: Pat Langley (University of Auckland / Carnegie Mellon Silicon Valley), "Social Understanding and Planning: Two Challenges for Cognitive Systems"
Tue, June 10, 2014, 2:00PM: Craig A. Knoblock (University of Southern California), "Creating and Using Linked Knowledge"

Wednesday, October 2, 2013, 2:00PM



AI Technology in Space

Tara Estlin   [homepage]

NASA Jet Propulsion Laboratory

The NASA Jet Propulsion Laboratory has 17 spacecraft operating across the solar system. One important technology area for many current and future missions is artificial intelligence. AI is being used on a range of robotic platforms, from the Mars rovers to a spacecraft exploring the surface of a comet. This talk will cover both current AI applications for NASA spacecraft and research directed towards future exploration capabilities.

One large area for AI is assisting in surface exploration on Mars. Advances in rover mobility have enabled rovers to travel many kilometers, providing new opportunities for scientific discovery. Dr. Estlin will provide an overview of the AEGIS system, which uses computer vision techniques to autonomously identify science targets during rover exploration. Future versions of this system will use machine learning methods to increase system accuracy and identify a larger range of science features.

Other presented applications will include the use of onboard image processing and automated planning techniques to direct orbiting spacecraft, including an Earth satellite and a spacecraft en route to a distant comet. In both of these projects, AI is used to analyze science data and quickly plan new science activities, significantly increasing the ability of these spacecraft to collect valuable data.

About the speaker:

Dr. Tara Estlin has over 15 years of experience in developing robotic AI software. A primary goal of her technology efforts is to support autonomous capabilities for future space missions. Dr. Estlin is currently leading the AEGIS Project, which is providing automated targeting technology for remote sensing instruments on the Mars Exploration Rover (MER) mission and the Mars Science Laboratory (MSL) mission. AEGIS was awarded the 2011 NASA Software of the Year award. For the past nine years, she has also been a rover driver for the MER mission, where she is responsible for sequencing drive and arm deployment commands for the MER Spirit and Opportunity rovers.

Dr. Estlin is currently the Deputy Chief Technologist for the JPL Mission Systems and Operations Division. She holds a B.S. in computer science from Tulane University and M.S. and Ph.D. degrees in computer science from the University of Texas at Austin.

Thursday, October 3, 2013, 4:00PM



MDS: A State/Model-Based Control Architecture for Robotic Systems

Daniel Dvorak   [homepage]

NASA Jet Propulsion Laboratory

Systems engineers have typically described software systems differently from their software engineering peers, and this frequently leads to incomplete or ambiguous communication. However, software engineering and systems engineering remain highly interdependent. Systems engineers must understand what the system must do (and document this understanding in the form of system specifications), while software engineers must design how the system will do it (and realize the design in implemented software artifacts). As software has become an ever-larger element of system design, this relationship has become increasingly problematic.

MDS confronts this growing interdependence between systems and software engineering with a more integrated approach to engineering complex systems. The MDS architecture provides the means for software engineers and systems engineers to communicate through a common language, and thus bridges the traditional gap between software requirements and software implementation. State Analysis augments this architecture with a principled methodology for developing and specifying system capability in terms defined by the architecture, and the MDS frameworks, embodying the architecture, simplify the translation of these specifications into software implementation. As a result, software engineers and systems engineers share a common model-based approach to defining, describing, developing, understanding, verifying, validating, operating, and visualizing what systems do. The net result is systems that are more reliable, cost-effective, and reusable.

The architectural principles and patterns described in this talk were first envisioned for use in deep space missions, but they are widely applicable to other fields, particularly for systems having complex interactions and dynamics, and which must continue operating in the presence of failures and other unpredictable events.

About the speaker:

Dr. Daniel Dvorak is a principal engineer in the Systems Engineering and Formulation Division at JPL working in software architecture for robotic systems, model-based systems engineering, fault management design, and human-robotic operations. Dan led the NASA study on flight software complexity and co-leads the NASA Software Architecture Review Board. Dan has served as PI for several R&D tasks related to systems/software control architecture for semi-autonomous robotic systems. Prior to joining JPL in 1996, Dan worked at AT&T Bell Laboratories on several projects including a system for monitoring the nation’s 4ESS electronic switching systems for long-distance telephone traffic, and a rule-based extension to the C++ language. Dan holds a Ph.D. in computer science from The University of Texas at Austin, an MS in computer engineering from Stanford University, and a BS in electrical engineering from Rose-Hulman Institute of Technology.

Friday, October 4, 2013, 11:00AM



Learning Syntax and Semantics for Natural Language Translation

David Chiang   [homepage]

Information Sciences Institute at USC

Automatic translation of human languages is one of the oldest problems in computer science. Two general approaches have been taken: one which relies heavily on knowledge of linguistic structure and meaning, and the other which relies on statistics from large amounts of data. For years, these two approaches seemed at odds with each other, but recent developments have made great progress towards building translation systems according to the maxim, "Linguistics tells us what to count, and statistics tells us how to count it" (Joshi). I will give an overview of three such developments from ISI. The first is the introduction of formal grammars (namely, synchronous context-free grammars) to model the syntax of human languages, first successfully demonstrated by my system Hiero. The second is ongoing work at ISI to incorporate knowledge of formal semantics. I will describe the formalism we are currently working with (synchronous hyperedge replacement grammars) and the efficient algorithms we have developed for processing semantic structures. Finally, I will discuss initial results on learning word meanings using neural networks, and prospects for learning them across languages.
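
To make the formalism concrete, here is a minimal, hypothetical sketch of a synchronous context-free grammar derivation in the Hiero style; the toy grammar and the derive helper below are invented for illustration and are not taken from the talk or from Hiero itself.

```python
# A toy synchronous CFG in the spirit of Hiero: each rule has a source side
# and a target side, and co-indexed nonterminals (here X1, X2) expand
# together, so a single rule captures both word choice and reordering.
RULES = {
    "TOP": (["la", "X1", "de", "X2"], ["X2", "'s", "X1"]),  # reordering rule
    "X1":  (["casa"], ["house"]),
    "X2":  (["juan"], ["John"]),
}

def derive(symbol="TOP"):
    """Expand one nonterminal synchronously; returns (source, target) tokens."""
    src_side, tgt_side = RULES[symbol]
    subs = {s: derive(s) for s in src_side if s in RULES}  # expand linked NTs once
    src = [w for s in src_side for w in (subs[s][0] if s in subs else [s])]
    tgt = [w for s in tgt_side for w in (subs[s][1] if s in subs else [s])]
    return src, tgt

src, tgt = derive()
print(" ".join(src), "->", " ".join(tgt))  # la casa de juan -> John 's house
```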

About the speaker:

David Chiang is a Research Assistant Professor in the USC Department of Computer Science and a Project Leader at the USC Information Sciences Institute. He earned his PhD from the University of Pennsylvania in 2004 under Aravind Joshi. His research is on computational models for learning human languages, particularly how to translate from one language to another. His work on applying formal grammars and machine learning to translation has been recognized with two best paper awards (at ACL 2005 and NAACL HLT 2009) and has transformed the field of machine translation. He has received research grants from DARPA, NSF, and Google, has served on the executive board of NAACL and the editorial board of Computational Linguistics, and is currently on the editorial board of Transactions of the ACL.

Thursday, October 17, 2013, 11:00AM



Security and Game Theory: Key Algorithmic Principles, Deployed Applications, Research Challenges

Milind Tambe   [homepage]

University of Southern California

Security is a critical concern around the world, whether it is the challenge of protecting ports, airports and other critical infrastructure, interdicting the illegal flow of drugs, weapons and money, protecting endangered species, forests and fisheries, suppressing crime in urban areas, or ensuring security in cyberspace. Unfortunately, limited security resources prevent full security coverage at all times. Instead, these limited security resources must be allocated and scheduled randomly and efficiently. The security resource allocation must simultaneously take into account an adversary's response to the security coverage (e.g., an adversary can exploit predictability in security allocation), the adversary's preferences, and the potential uncertainty over such preferences and capabilities.

Computational game theory can help us build decision aids for efficient, randomized security resource allocation. Indeed, by casting the security allocation problem as a Bayesian Stackelberg game, we have developed new algorithms that have been deployed over multiple years in multiple applications: for security of ports and ferry traffic with the US Coast Guard (currently deployed in the ports of New York/New Jersey, Boston, and Los Angeles/Long Beach, and now in preparation for deployment at other ports), for security of airports and air traffic with the Federal Air Marshals (FAMS) and the Los Angeles World Airport (LAX) police, and for security of metro trains with the Los Angeles Sheriff's Department (LASD) and the TSA, with additional applications under development. These applications are leading to real-world, use-inspired research in the emerging area of "security games": the research challenges include scaling up security games to large-scale problems, handling significant adversarial uncertainty, dealing with the bounded rationality of human adversaries, and other interdisciplinary challenges. I will provide an overview of my research group's work in this area, outlining key algorithmic principles and research results, as well as a discussion of our deployed systems and lessons learned.

(*) This is joint work with a number of former and current PHD students, postdocs, and other collaborators, all listed at: http://teamcore.usc.edu/security
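
To make the Stackelberg formulation concrete, the sketch below solves a toy two-target security game (the simpler single-adversary-type case, not the full Bayesian game) with the classic multiple-LPs method; the payoff matrices are invented for illustration.

```python
# A minimal sketch of Stackelberg security game solving via the
# multiple-LPs method: for each attacker target j, find the defender
# mixed strategy maximizing defender utility subject to j being an
# attacker best response, then keep the best. Payoffs are hypothetical.
import numpy as np
from scipy.optimize import linprog

U_def = np.array([[ 5., -4.],    # row i: defender covers target i
                  [-3.,  4.]])   # column j: attacker hits target j
U_att = np.array([[-5.,  4.],
                  [ 3., -4.]])

def solve_stackelberg(U_def, U_att):
    n, m = U_def.shape
    best = (-np.inf, None, None)
    for j in range(m):
        # For every alternative j': the attacker must not prefer j' over j.
        A_ub = (U_att - U_att[:, [j]]).T     # rows: U_att[:, j'] - U_att[:, j]
        res = linprog(-U_def[:, j],          # maximize defender payoff
                      A_ub=A_ub, b_ub=np.zeros(m),
                      A_eq=np.ones((1, n)), b_eq=[1.0],
                      bounds=[(0, 1)] * n)
        if res.success and -res.fun > best[0]:
            best = (-res.fun, res.x, j)
    return best   # (defender value, coverage distribution, attacked target)

value, coverage, target = solve_stackelberg(U_def, U_att)
print(coverage, "->", target, value)
```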

About the speaker:

Milind Tambe is Helen N. and Emmett H. Jones Professor in Engineering at the University of Southern California (USC). He is a fellow of AAAI and a recipient of the ACM/SIGART Autonomous Agents Research Award, the Christopher Columbus Fellowship Foundation Homeland Security Award, the INFORMS Wagner Prize for excellence in Operations Research practice, and the Rist Prize of the Military Operations Research Society. Prof. Tambe has contributed several foundational papers in agents and multiagent systems, including in the areas of multiagent teamwork, distributed constraint optimization (DCOP), and security games. For this research, he has received the "influential paper award" from the International Foundation for Autonomous Agents and Multiagent Systems (IFAAMAS), as well as, with his research group, best paper awards at a number of premier Artificial Intelligence conferences and workshops, including multiple best paper awards at the International Conference on Autonomous Agents and Multiagent Systems and the International Conference on Intelligent Virtual Agents. In addition, the "security games" framework and algorithms pioneered by Prof. Tambe and his research group are now deployed for real-world use by several agencies, including the US Coast Guard, the US Federal Air Marshals Service, the Transportation Security Administration, LAX Police, and the LA Sheriff's Department, for security scheduling at a variety of US ports, airports, and transportation infrastructure. This research has led to him and his students receiving the US Coast Guard Meritorious Team Commendation from the Commandant, the US Coast Guard First District's Operational Excellence Award, a Certificate of Appreciation from the US Federal Air Marshals Service, and a special commendation given by the Los Angeles World Airports police from the city of Los Angeles. Additionally, for his research Prof. Tambe has received the IBM Faculty Award, the Okawa Foundation Faculty Research Award, the RoboCup Scientific Challenge Award, and the USC Viterbi School of Engineering Use-Inspired Research Award. Finally, for his teaching and service, Prof. Tambe has received the USC Steven B. Sample Teaching and Mentoring Award and the ACM Recognition of Service Award. Recently, he co-founded ARMORWAY, a company focused on risk mitigation and security resource optimization, where he serves on the board of directors. Prof. Tambe received his Ph.D. from the School of Computer Science at Carnegie Mellon University.

Friday, November 22, 2013, 11:00AM



From Big Data to Big Knowledge

Kevin Murphy   [homepage]

Google Inc.

We are drowning in big data, but a lot of it is hard to interpret. For example, Google indexes about 40B webpages, but these are just represented as bags of words, which don't mean much to a computer. To get from "strings to things", Google introduced the Knowledge Graph (KG), which is a database of facts about entities (people, places, movies, etc.) and their relations (nationality, geo-containment, actor roles, etc.). KG is based on Freebase, but supplements it with various other structured data sources. Although KG is very large (about 500M nodes/entities and 30B edges/relations), it is still very incomplete. For example, 94% of the people are missing their place of birth, and 78% have no known nationality - these are examples of missing links in the graph. In addition, we are missing many nodes (corresponding to new entities), as well as new types of nodes and edges (corresponding to extensions to the schema). In this talk, I will survey some of the efforts we are engaged in to try to "grow" KG automatically using machine learning methods. In particular, I will summarize our work on the problems of entity linkage, relation extraction, and link prediction, using data extracted from natural language text as well as tabular data found on the web.
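
As a toy illustration of link prediction over KG-style triples, the sketch below applies one hand-written inference rule built from the abstract's own examples (place of birth, geo-containment, nationality); the entities, relation names, and rule are hypothetical and are not Google's schema or methods.

```python
# A minimal sketch of rule-based link prediction over knowledge-graph
# triples: born_in(p, c) & contained_in(c, n) => nationality(p, n).
# All entity and relation names are illustrative only.
triples = {
    ("Marie_Curie", "born_in", "Warsaw"),
    ("Warsaw", "contained_in", "Poland"),
}

def predict_nationality(triples):
    """Infer missing nationality edges from birthplace and geo-containment."""
    born = {(h, t) for h, r, t in triples if r == "born_in"}
    geo = {(h, t) for h, r, t in triples if r == "contained_in"}
    return {(p, "nationality", n) for p, c in born for c2, n in geo if c == c2}

print(predict_nationality(triples))
# {('Marie_Curie', 'nationality', 'Poland')}
```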

About the speaker:

Kevin Murphy is a research scientist at Google in Mountain View, California, where he works on information extraction and probabilistic knowledge bases. Before joining Google in 2011, he was an associate professor of computer science and statistics at the University of British Columbia in Vancouver, Canada. Before starting at UBC in 2004, he was a postdoc at MIT. Kevin got his BA from the University of Cambridge, his MEng from the University of Pennsylvania, and his PhD from UC Berkeley. He has published over 50 papers in refereed conferences and journals related to machine learning and graphical models, as well as an 1100-page textbook called "Machine Learning: a Probabilistic Perspective" (MIT Press, 2012), which is currently the best-selling machine learning book on Amazon.com. Kevin is also the (co) Editor-in-Chief of the Journal of Machine Learning Research.

Wednesday, January 15, 2014, 11:00AM



A Formal Hybrid Framework for Robotic Manipulation

Esra Erdem   [homepage]

Sabanci University

Robotic manipulation aims to automatically generate robot motion sequences for manipulating movable objects among obstacles in order to achieve a desired goal configuration. Some of these objects can only move when picked up by robots, and the order of pick-and-place operations may matter for obtaining a feasible kinematic solution. Therefore, geometric reasoning and motion planning alone are not sufficient to solve these manipulation problems, and planning of actions such as pick-and-place operations needs to be integrated with motion planning. We present a modular framework that combines these two sorts of reasoning, using expressive formalisms and efficient solvers of answer set programming. We illustrate applications of this framework to complex robotic manipulation tasks that require concurrent execution of actions, with both dynamic simulations and physical implementations, using multiple Kuka youBots.

About the speaker:

Esra Erdem is an associate professor in Computer Science and Engineering at Sabanci University. She received her Ph.D. in computer sciences at the University of Texas at Austin (2002), and visited University of Toronto and Vienna University of Technology for postdoctoral research (2002-2006). Her research is in the area of artificial intelligence, in particular, the mathematical foundations of knowledge representation and reasoning, and their applications to cognitive robotics and computational biology.

Friday, January 31, 2014, 11:00AM



Lithographic Vision Sensors

Sanjeev J. Koppal   [homepage]

Texas Instruments

Miniature computing platforms will influence fields such as geographic and environmental sensing, search and rescue, industrial control and monitoring, energy, and health. Computer vision algorithms can broaden this impact by allowing small devices to utilize the rich visual information of their surroundings. However, achieving computer vision on small form-factor devices is a challenge due to the severe constraints on power and mass.

Lithographic vision sensors are a class of devices that allow computer vision in these scenarios by leveraging two characteristics. First, every part of the sensor is jointly designed with the computer vision task in mind, in order to extract the maximum energy efficiency. Second, the optics of the system carry a significant portion of the computational burden. Balancing the performance of any particular computer vision algorithm with the physical aspects of a lithographic vision sensor (such as field of view, mass, and power consumption) provides a rich source of interesting new research problems.

Recent advances in material science and small-scale fabrication coupled with the rapid prototyping revolution have significantly increased the ease of designing, building, testing and iterating these devices. A key target of this research is to create a dictionary of low-power vision sensors for different task specific situations, analogous to how biological eyes are well suited for particular conditions. The broader goal is to build a general optimization and design framework that can produce these task-specific designs given certain physical constraints. In that sense, the ultimate goal is to build a "compiler" for visual sensors.

About the speaker:

Sanjeev J. Koppal obtained his Master's and Ph.D. degrees from the Robotics Institute at Carnegie Mellon University, where his adviser was Prof. Srinivasa Narasimhan. After CMU, he was a post-doctoral research associate in the School of Engineering and Applied Sciences at Harvard University, with Prof. Todd Zickler. He received his B.S. degree from the University of Southern California in 2003. His interests span computer vision and computational photography and include novel cameras and micro sensors, digital cinematography, 3D cinema, image-based/light-field rendering, appearance modeling, 3D reconstruction, physics-based vision, and active illumination. He is currently a researcher at Texas Instruments Imaging R&D. In the spring of 2014 he will join the University of Florida's ECE department as an assistant professor.

Friday, February 28, 2014, 11:00AM



CANCELED: Health Care Coordination and a Multi-Agent Systems "Turing Challenge"

CANCELED: Barbara Grosz   [homepage]

Harvard University

I recently argued that Turing, were he alive now, would conjecture differently than he did in 1950, and I suggested a new “Turing challenge” question: “Is it imaginable that a computer (agent) team member could behave, over the long term and in uncertain, dynamic environments, in such a way that people on the team will not notice it is not human?” In the last several decades, the field of multi-agent systems has developed a vast array of techniques for cooperation and collaboration as well as for agents to handle adversarial or strategic situations. Even so, current generation agents are unlikely to meet this new challenge except in very simple situations. Meeting the challenge requires new algorithms and novel plan representations. This talk will explore the implications of this new “Turing question” in the context of my group’s recent work on developing intelligent agents able to work on a team with health care providers and patients to improve care coordination. Our goal is to enable systems to support a diverse, evolving team in formulating, monitoring and revising a shared “care plan” that operates on multiple time scales in uncertain environments. The coordination of care for children with complex conditions, which is a compelling societal need, is presented as a model environment in which to develop and assess such systems. The talk will focus in particular on challenges of interruption management, information sharing, and crowdsourcing for health literacy.

About the speaker:

Barbara J. Grosz is Higgins Professor of Natural Sciences in the School of Engineering and Applied Sciences at Harvard University. From 2001-2011, she served as dean of science and then dean of the Radcliffe Institute for Advanced Study at Harvard. Grosz is known for her seminal contributions to the fields of natural-language processing and multi-agent systems. She developed some of the earliest computer dialogue systems and established the research field of computational modeling of discourse. Her work on models of collaboration helped establish that field and provides the framework for several collaborative multi-agent and human-computer interface systems. Grosz is a member of the National Academy of Engineering, the American Philosophical Society, and the American Academy of Arts and Sciences and a fellow of the Association for the Advancement of Artificial Intelligence (AAAI), the Association for Computing Machinery, and the American Association for the Advancement of Science. In 2009, she received the ACM/AAAI Allen Newell Award for “fundamental contributions to research in natural language processing and in multi-agent systems, for her leadership in the field of artificial intelligence, and for her role in the establishment and leadership of interdisciplinary institutions.” She served as president of the AAAI from 1993-1995 and on the Boards of IJCAI (Chair 1989-91) and IFAAMAS.

Friday, March 21, 2014, 2:00PM



Bridging the Gaps of Perception and Action towards Situated Human-Robot Dialogue

Joyce Chai   [homepage]

Michigan State University

A new generation of robots has emerged in recent years to serve as humans' assistants and companions. However, due to significantly mismatched capabilities in perception and action between humans and robots, natural language based communication becomes difficult. First, the robot's representation of its perceived world and its action space is often continuous and numerical in nature, but human language is discrete and symbolic. For the robot to understand human language and take corresponding actions, it needs to first ground the meanings of human language in its own sensorimotor representation. Second, the robot may not have complete knowledge about the shared environment and the joint task, and so may not be able to connect human language to its own representations. It is therefore important for the robot to continuously acquire new knowledge through interaction with humans and the environment.

To address these challenges, we have been working towards bridging the gaps of perception and action between humans and robots. To mediate perceptual differences, we have developed graph-based approaches to support collaborative dialogue for referential grounding. We have also developed collaborative models for referring expression generation that take perceptual differences into account. To bridge the gap of action, we are exploring approaches for the robot (a robotic arm) to learn new high-level actions through natural language dialogue. In this talk, I will give an introduction to this line of research and discuss our approaches and empirical results.

About the speaker:

Joyce Chai is a Professor in the Department of Computer Science and Engineering at Michigan State University. She received a Ph.D. in Computer Science from Duke University in 1998. Prior to joining MSU in 2003, she was a Research Staff Member at IBM T. J. Watson Research Center. Her research interests include natural language processing, situated dialogue agents, information extraction and retrieval, and intelligent user interfaces. She served as a Program Co-chair for the Annual Meeting of the Special Interest Group on Discourse and Dialogue (SIGDIAL) in 2011 and a Program Co-chair for the ACM International Conference on Intelligent User Interfaces in 2014. She is a recipient of the National Science Foundation CAREER Award in 2004 and a co-recipient of the Best Long Paper Award from the Annual Meeting of the Association for Computational Linguistics (ACL) in 2010.

Thursday, April 3, 2014, 3:00PM



Class-Level Constraint-Based Clustering and its Application to Remote Sensing, AAAI 2014 Keywords and Multiple Sclerosis

Carla Brodley   [homepage]

Tufts University

We present class-level constraint-based clustering, motivated by two new general applications: redefining class definitions via constraint-based clustering, and removing confounding factors when clustering. Class definitions for supervised machine learning are often created for a particular end use with limited regard as to whether the data supports these distinctions. There are two potential issues from the point of view of creating an accurate classifier. First, the features may not support the required class distinctions. Second, class definitions may change over time and thus need to be re-examined. We present and evaluate our class-level constraint-based clustering algorithm in the context of two motivating domains: redefining the land-cover classification scheme used to create global land-cover maps of the Earth, and rediscovering the set of keywords for AAAI 2014. The second half of the talk proposes an approach to applying constraint-based clustering to remove confounding factors, which, if left in the data, can lead to undesirable clustering results. For example, in medical datasets, age is often a confounding factor in tests designed to judge the severity of a patient's disease through measures of mobility, eyesight, and hearing. In such cases, removing age from each instance will not remove its effect from the data, as other features will be correlated with age. Motivated by the need to find homogeneous groups of multiple sclerosis patients, we apply our approach to remove physician subjectivity from patient data. The result is a promising novel grouping of patients that can help uncover the factors that impact disease progression in MS.
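
One standard way to handle a linear confound such as age, sketched below, is to residualize every feature on the confound before clustering; this is a generic illustration under a linearity assumption, not necessarily the algorithm presented in the talk.

```python
# A minimal sketch of removing a confounding factor before clustering:
# regress every feature on the confound (here, age) and cluster the
# residuals, so variance explained by age no longer drives the grouping.
# Assumes the confounding is approximately linear; data is synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.cluster import KMeans

def cluster_without_confound(X, age, k=3):
    reg = LinearRegression().fit(age.reshape(-1, 1), X)  # multi-output fit
    residuals = X - reg.predict(age.reshape(-1, 1))      # age effect removed
    return KMeans(n_clusters=k, n_init=10).fit_predict(residuals)

rng = np.random.default_rng(0)
age = rng.uniform(20, 80, 200)
X = np.column_stack([0.10 * age + rng.normal(size=200),   # age-correlated
                     -0.05 * age + rng.normal(size=200)]) # features
print(cluster_without_confound(X, age)[:10])
```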

About the speaker:

Carla E. Brodley is a professor in the Department of Computer Science at Tufts University and holds a secondary appointment in the Clinical and Translational Science Institute, Tufts Medical Center. She received her PhD in computer science from the University of Massachusetts Amherst in 1994. From 1994 to 2004, she was on the faculty of the School of Electrical Engineering at Purdue University, West Lafayette, Indiana. She joined the faculty at Tufts in 2004. Professor Brodley's research interests include machine learning, knowledge discovery in databases, health IT, and personalized medicine. She has worked in the areas of intrusion detection, anomaly detection, classifier formation, unsupervised learning, and applications of machine learning to remote sensing, computer security, neuroscience, digital libraries, astrophysics, content-based image retrieval of medical images, computational biology, chemistry, evidence-based medicine, and personalized medicine. She served as chair of the Computer Science Department at Tufts from 2010 to 2013. In 2001 she served as program co-chair for the International Conference on Machine Learning (ICML), and in 2004 she served as the general chair for ICML. In 2004-2005 she was a member of the Defense Science Study Group. She was a member of the CRA board of directors from 2008 to 2012, was on the AAAI council from 2008 to 2011, and co-chaired CRA-W from 2008 to 2011. Currently she is on the editorial boards of JMLR, Machine Learning, and DMKD; she is a board member of the International Machine Learning Society; she is co-chairing AAAI in 2014; and she is a member of ISAT.

Friday, April 4, 2014, 11:00AM



Broad-Coverage Scene Parsing with Object Instances and Occlusion Ordering

Svetlana Lazebnik   [homepage]

University of Illinois at Urbana-Champaign

I will present our work on image parsing, or segmenting an image and labeling its regions with semantic categories (e.g., sky, ground, tree, person, etc.). Our aim is to achieve broad coverage across hundreds of object categories in diverse, large-scale datasets of realistic indoor and outdoor scenes. First, I will introduce our baseline nonparametric region-based parsing system that can easily scale to datasets with tens of thousands of images and hundreds of labels. Next, I will describe our approach for combining this region-based system with per-exemplar sliding window detectors to improve parsing performance on small object classes, which achieves state-of-the-art results on several challenging datasets. Finally, I will describe our most recent work that goes beyond per-pixel labels and infers the spatial extent of individual object instances together with their occlusion relationships.

About the speaker:

Svetlana Lazebnik received her Ph.D. at the University of Illinois at Urbana-Champaign in 2006. From 2007 to 2011, she was an assistant professor of computer science at the University of North Carolina at Chapel Hill, and in 2012 she returned to the University of Illinois as a faculty member. She is the recipient of an NSF CAREER Award, a Microsoft Research Faculty Fellowship, and a Sloan Foundation Fellowship. She is a member of the DARPA Computer Science Study Group and of the editorial board of the International Journal of Computer Vision. Her research interests focus on scene understanding and modeling the content of large-scale photo collections.

Tuesday, April 8, 2014, 2:00PM



Towards a High-Performance Rapid Prototyping Tool for State-Space Search

Rob Holte   [homepage]

University of Alberta

State-space search involves finding solution paths in very large state spaces, such as puzzles (e.g. Rubik's Cube), logistics problems, or edit-distance problems (e.g. biological sequence alignment). Traditionally, search performance has been maximized by writing special-purpose code for each new state space. This has the disadvantage that it requires considerable human effort and ingenuity, is potentially error-prone, and minimizes the amount of code re-use that is possible. An alternative is a rapid prototyping tool in which completely generic algorithm implementations and data structures are used, and the human's role is simply to specify the state space in a high-level language. This maximizes code re-use and minimizes human effort and error, but is potentially much less efficient in terms of run time and/or memory usage.

In this talk I describe our research towards getting the best of both approaches by compiling a state space description into C code. I illustrate this approach with psvn2c, a state-space compiler I have developed together with Neil Burch. The second half of the talk focuses on a powerful component of psvn2c called move pruning. It is a fully automatic analysis that can reduce search time by orders of magnitude and equal or outperform human analysis. One major lesson from our work is the need for formal proofs of correctness of move pruning methods.
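
To give a feel for move pruning, here is a minimal sketch of one classic instance: pruning redundant orderings of commuting operators in a depth-first search. The domain hooks are hypothetical, and this is far simpler than the automatic analysis in psvn2c.

```python
# A minimal sketch of move pruning in depth-first search: if two operators
# commute (here: they touch disjoint variable sets), only their canonical
# ordering is expanded, eliminating duplicate a-then-b / b-then-a paths.
def commute(a, b, touched):
    return touched[a].isdisjoint(touched[b])

def dfs(state, depth, last_op, ops, apply_op, touched, visit):
    visit(state)
    if depth == 0:
        return
    for op in ops:
        # Prune the non-canonical ordering of a commuting pair.
        if last_op is not None and commute(op, last_op, touched) and op < last_op:
            continue
        dfs(apply_op(state, op), depth - 1, op, ops, apply_op, touched, visit)

# Toy domain: each operator increments one counter, so all distinct
# operators commute with each other.
ops = [0, 1, 2]
touched = {i: {i} for i in ops}
apply_op = lambda s, i: s[:i] + (s[i] + 1,) + s[i + 1:]
seen = []
dfs((0, 0, 0), 2, None, ops, apply_op, touched, seen.append)
print(len(seen))  # 10 states visited, versus 13 without pruning
```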

This talk is aimed at a general computer science audience; it does not require any prior knowledge of state-space search or Artificial Intelligence.

About the speaker:

Professor Robert Holte of the Computing Science Department at the University of Alberta is a former editor-in-chief of "Machine Learning", a leading international journal, and co-founder and former director of the world-renowned Alberta Innovates Centre for Machine Learning (AICML). His current research is on single-agent heuristic search, primarily focusing on the use of automatic abstraction techniques to speed up search, methods for predicting the run-time of a search algorithm given a particular heuristic, and the use of machine learning to create heuristics. He has approximately 100 scientific papers to his credit, covering both pure and applied research, and has served on the steering committee or program committee of numerous major international Artificial Intelligence conferences. Professor Holte was elected a Fellow of the AAAI in 2011.

Thursday, April 10, 2014, 9:30AM



A mechanistic model of the development and function of the primary visual cortex

Jim Bednar   [homepage]

University of Edinburgh School of Informatics

Can the complex circuitry of the primary visual cortex (V1) be understood by considering the process by which it develops? In this talk I outline a long-term project to build the first computational model to explain both the development and the function of mammalian V1. To do this, researchers in my group are building the first developmental models with wiring consistent with V1, the first to have realistic behavior with respect to visual contrast, the first to include all of the various visual feature dimensions, and the first to include the major sources of connectivity that modulate V1 neuron responses. The goal is to have a comprehensive explanation for why V1 is wired as it is in the adult, how that circuitry reflects the visual environment, and how the resulting architecture leads to the observed behavior of the neurons during visual tasks. This approach leads to experimentally testable predictions at each stage, and can also be applied to understanding other sensory cortices, such as somatosensory and auditory cortex, while suggesting computational principles that could be useful for processing real-world data in general.

About the speaker:

Jim Bednar leads the Computational Systems Neuroscience research group at the University of Edinburgh, focusing on modeling the development and function of mammalian visual systems. He is the Director and PI of the Edinburgh Doctoral Training Centre in Neuroinformatics and Computational Neuroscience, with 60 current PhD students. His Ph.D. in Computer Science is from the University of Texas at Austin, and he also has degrees in Philosophy and Electrical Engineering. He is a co-author of the monograph "Computational Maps in the Visual Cortex" (Springer, 2005), and leads the Topographica cortical modeling software project (see topographica.org). He is also a member of editorial boards for the journals Connection Science, Frontiers in Neuroinformatics, and Frontiers in Computational Neuroscience, and edits the Visual System section of the Springer Encyclopedia of Computational Neuroscience.

Wednesday, April 16, 2014, 3:00PM



The Evolution of Division of Labor and Specialization in Digital Organisms

Charles Ofria   [homepage]

Michigan State University

Many species of organisms succeed by forming groups and dividing important tasks among specialized members of those groups. The evolution of such division of labor strategies, and particularly reproductive division of labor, is a hallmark of major transitions in evolution. Additionally, understanding these evolutionary strategies holds promise for developing new algorithmic techniques for distributed computation. Studying how division of labor first arises in nature is challenging due to the huge timescales it takes for the process to play out. To overcome these challenges, we use populations of digital organisms that evolve in a natural and open-ended fashion. In this talk, I will describe how overhead costs associated with task-switching drive the evolution of highly collaborative strategies. Further, if there are negative repercussions for individuals who perform certain types of tasks (such as metabolic slowdowns or damage to their genomes), groups can find ways to balance these effects across their member organisms, or even make use of reproductive division of labor, wherein some individuals sacrifice their own success for the good of the group. This latter strategy is akin to developmental patterns in multi-cellular organisms where most cells (called somatic cells) sacrifice their individual reproductive potential for the good of the group, while others (germ cells) protect their genetic material for use in founding future groups. I will also show that the evolution of somatic cells enables phenotypic strategies that are otherwise not easily accessible to undifferentiated organisms, though expression of these new phenotypic traits typically includes negative side effects such as aging.

About the speaker:

Dr. Charles Ofria is a Professor of Computer Science and Engineering at Michigan State University and is the deputy director of the BEACON Center for the Study of Evolution in Action, a multi-university NSF Science and Technology Center. His research lies at the intersection of Computer Science and Evolutionary Biology, developing a two-way flow of ideas between the fields. He received a bachelor’s degree in 1994 from SUNY Stony Brook with a triple major in Pure Math, Applied Math, and Computer Science. In 1999, he received a Ph.D. in Computation and Neural Systems from the California Institute of Technology, followed by a three-year postdoc in the Center for Microbial Ecology at MSU. Dr. Ofria is the architect of the Avida Digital Evolution Research Platform, which is downloaded over a thousand times per month for use in research and education at dozens of universities around the world.

Friday, April 18, 2014, 11:00AM



Computational Graphic Design: Colors, Fonts, and Layout

Aaron Hertzmann   [homepage]

Adobe Research

I will present current research projects on understanding and creating graphic designs. Graphic design is a field with a very long history. There is a lot of expert knowledge and folklore involved in design, but little science. By using large, online datasets, design insights, and machine learning, we can study graphic design in a more rigorous way, and also create better tools for amateurs and professionals. I will describe some very initial efforts in these directions, in the areas of color compatibility, font selection, and single-page layout.

This is joint work with Peter O'Donovan, Jānis Lībeks, and Aseem Agarwala.

About the speaker:

Aaron Hertzmann is a Senior Research Scientist at Adobe Systems. He received a BA in Computer Science and Art/Art History from Rice University in 1996 and a PhD in Computer Science from New York University in 2001. He was a Professor at the University of Toronto for ten years, and has also worked at Pixar Animation Studios, University of Washington, Microsoft Research, Mitsubishi Electric Research Lab, Interval Research Corporation, and NEC Research. He is an associate editor for ACM Transactions on Graphics. His awards include the MIT TR100 (2004), an Ontario Early Researcher Award (2005), a Sloan Foundation Fellowship (2006), a Microsoft New Faculty Fellowship (2006), the CACS/AIC Outstanding Young CS Researcher Award (2010), the Steacie Prize for Natural Sciences (2010), the Rice Outstanding Young Engineering Alumnus (2011), and the BMVC Best Science Paper Award (2013).

Tuesday, April 22, 2014, 11:00AM



Learning Parameterized Motor Skills

Bruno Castro da Silva   [homepage]

University of Massachusetts Amherst

Flexible skills are one of the fundamental building blocks required to design truly autonomous robots. When solving control problems, single policies can be learned but may fail if the tasks at hand vary or if the agent has to face novel, unknown contexts. Learning a single policy for each possible variation of a task or context is often infeasible. To address this problem we introduce a general framework for learning reusable, parameterized skills.

Parameterized skills are flexible behaviors that can produce, on demand, policies for any tasks drawn from a distribution of related control problems. Once acquired, they can be used to solve novel variations of a task, even ones with which the agent has had no direct experience. They also allow for the construction of hierarchically structured policies and help the agent abstract away details of lower-level controllers.

Previous work has shown that it is possible to transfer information between pairs of related control tasks and that parameterized policies can be constructed to deal with slight variations of a known domain. However, limited attention has been given to methods that allow an agent to autonomously synthesize general, parameterized skills from very few training samples.

We identify and solve three problems required to autonomously and efficiently learn parameterized skills. First, an agent observing or learning just a small number of task instances needs to be able to generalize those experiences and synthesize a single general, flexible skill. The skill should be capable of producing appropriate behaviors when invoked in novel contexts or applied to yet-unseen variations of a task. Second, the agent must recognize when suboptimal policies experienced while learning one task instance may nonetheless be useful for solving different, but possibly related, tasks; this allows seemingly unsuccessful policies to be used as additional training samples, thus accelerating the construction of the skill. Lastly, the agent must be capable of actively selecting which tasks to train on next in order to more rapidly become competent in the skill. We evaluate our methods on a physical humanoid robot tasked with autonomously constructing a whole-body parameterized throwing skill from limited data.
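
A minimal sketch of the core idea follows, under the simplifying assumptions that a policy is just a parameter vector and that the mapping from task parameters to policy parameters is smooth enough for ridge regression; the class below is illustrative, not the authors' system.

```python
# A minimal sketch of a parameterized skill: fit a map from task parameters
# tau to policy parameters theta on a few solved training tasks, then produce
# policy parameters on demand for novel tasks. All names are hypothetical.
import numpy as np

class ParameterizedSkill:
    def __init__(self, ridge=1e-3):
        self.ridge = ridge
        self.W = None

    def fit(self, taus, thetas):
        """taus: (N, d_task) task parameters; thetas: (N, d_policy) policy
        parameters found for each training task (e.g., by policy search)."""
        X = np.hstack([taus, np.ones((len(taus), 1))])   # affine features
        A = X.T @ X + self.ridge * np.eye(X.shape[1])
        self.W = np.linalg.solve(A, X.T @ thetas)        # ridge regression

    def policy_for(self, tau):
        """Produce policy parameters on demand for a novel task instance."""
        return np.append(tau, 1.0) @ self.W

skill = ParameterizedSkill()
skill.fit(np.array([[1.0], [2.0], [3.0]]),       # e.g., target distances
          np.array([[0.9], [2.1], [3.0]]))       # policy params found per task
print(skill.policy_for(np.array([2.5])))         # policy for a novel task
```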

About the speaker:

Bruno Castro da Silva is a Ph.D. candidate at the University of Massachusetts, working under the supervision of Prof. Andrew Barto. He received his B.S. in Computer Science from the Federal University of Rio Grande do Sul (Brazil) in 2004, and his M.Sc. from the same university in 2007. Bruno has worked, on different occasions from 2011 to 2013, as a visiting researcher at the Laboratory of Computational Embodied Neuroscience in Rome, Italy, developing novel control algorithms for the iCub robot. His research interests lie at the intersection of machine learning, optimal control theory, and robotics, and include the construction of reusable motor skills, active learning, efficient exploration of large state spaces, and Bayesian optimization applied to control.

Wednesday, April 30, 2014, 11:00AM



Advances in Active and Transfer Learning

Ian Davidson* and Jieping Ye**   [homepage]

*UC Davis; **Arizona State University

We will cover some of our recent work that makes learning more human-like by allowing machines to leverage knowledge from other domains and to ask queries. The former capability is generally known as transfer learning (more specifically, domain adaptation, multi-task, or multi-source learning), while the latter is known as active learning. The contribution of our work is rigorous formulations of transfer learning and of active methods that involve asking "easy" (non-label-focused) queries. The talk will focus on learning methods, but we will motivate and demonstrate their use in the analysis of medical images. Finally, we will sketch some directions on how to use active and transfer learning together by attempting to fix transfer failure via active learning.
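
For reference, the sketch below shows pool-based active learning with plain uncertainty sampling, the conventional label-focused query; the "easy" non-label queries described in the talk are a different, richer query type.

```python
# A minimal sketch of pool-based active learning via uncertainty sampling:
# query labels for the unlabeled points closest to the decision boundary.
# The tiny synthetic dataset is for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

def query_indices(model, X_pool, batch=5):
    margin = np.abs(model.predict_proba(X_pool)[:, 1] - 0.5)
    return np.argsort(margin)[:batch]    # most uncertain points first

rng = np.random.default_rng(0)
X_lab = rng.normal(size=(20, 2))
y_lab = (X_lab[:, 0] > 0).astype(int)            # small labeled seed set
model = LogisticRegression().fit(X_lab, y_lab)
print(query_indices(model, rng.normal(size=(100, 2))))  # points to label next
```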

Friday, May 2, 2014, 11:00AM



Imitation vs innovation in children's language learning

Colin Bannard   [homepage]

UT Austin

This talk is concerned with children's use of imitation as a strategy in language learning. I will describe a range of different studies, all of which are concerned in some way with understanding when children will choose to imitate language they have heard others use, and when they will choose to be selective or creative in their productions. I will explore the utility of imitation in view of the statistics of the language children hear and the nature of their social environments. I will present simple corpus-derived statistical models that allow us to predict when children will imitate and when they will innovate, and will describe how such models might explain aspects of pragmatic and grammatical development.

About the speaker:

Colin Bannard is Assistant Professor in the Linguistics Department at UT Austin, where he runs the Child Language Lab. He received his PhD from the School of Informatics at the University of Edinburgh for work in computational linguistics and psycholinguistics. His computational linguistics work was in the areas of Statistical Machine Translation and Lexical Acquisition. He then did a postdoc in the Department of Psychology at the Max Planck Institute for Evolutionary Anthropology in Leipzig, where his work switched to focus exclusively on child language acquisition. His interests include children's statistical learning, the relative contributions of habitual and intentional processes in language use, and individual differences in how children turn the finite sample of language they hear into a productive system for communication. Much of his work focuses on using corpora of child-directed speech to make predictions about early language production, which are then tested in the lab.

Tuesday, June 3, 2014, 11:00AM



Social Understanding and Planning: Two Challenges for Cognitive Systems

Pat Langley   [homepage]

University of Auckland / Carnegie Mellon Silicon Valley

In this talk, I review the cognitive systems paradigm, which adopts the original aim of AI to reproduce the full range of human intelligence. After this, I argue that social cognition, in particular the ability to reason about others' mental states, is a natural target for cognitive systems research. I report progress on two problems - social understanding and social planning - that depend upon this capacity. The first involves a variant of plan understanding in which the viewer must infer the goals and beliefs of interacting agents from their actions, including their models of each others' goals and beliefs. The second involves generating plans that achieve goals by manipulating the mental states of others. I describe systems that address these tasks by separating domain-level expertise from more general knowledge about social interactions and applying this content at nested levels of belief. I present results on a set of fable-like scenarios that involve reasoning about ignorance and intentional deception, along with similar results on high-level aspects of task-oriented dialogue. In closing, I discuss plans for future research in these areas.

About the speaker:

Dr. Pat Langley serves as Professor of Computer Science at the University of Auckland and as Distinguished Scientist at Carnegie Mellon University's Silicon Valley Campus. He has contributed to artificial intelligence and cognitive science for more than 30 years, having published over 200 papers and five books on these topics. Professor Langley developed some of the earliest computational approaches to scientific knowledge discovery, and he was an early champion of both experimental studies of machine learning and its application to real-world problems. Dr. Langley is the founding editor of two journals, Machine Learning in 1986 and Advances in Cognitive Systems in 2012, and he is a Fellow of both AAAI and the Cognitive Science Society. His current research focuses on the construction of explanatory scientific models and architectures for complex social cognition.

Tuesday, June 10, 2014, 2:00PM



Creating and Using Linked Knowledge

Craig A. Knoblock   [homepage]

University of Southern California

Companies such as Google and Microsoft are building web-scale linked knowledge bases for the purpose of indexing and searching the Web, but these efforts do not address the problem of building accurate, fine-grained, deep knowledge bases for specific application domains. We are developing an integration framework, called Karma, which supports the rapid, end-to-end construction of such linked knowledge bases. In this talk I will describe machine-learning techniques for mapping new data sources to a domain model and present an application of this technology to build a virtual museum of American art.
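
The sketch below gives the flavor of mapping a new source to a domain model by semantically labeling its columns; the regex scoring is a deliberately simplistic stand-in for Karma's learned techniques, and the class names and patterns are hypothetical.

```python
# A minimal sketch of semantic labeling: guess which domain-model class
# each column of a new data source belongs to, by scoring its values
# against per-class patterns. Classes and patterns are illustrative only.
import re

PATTERNS = {
    "Artist": re.compile(r"^[A-Z][a-z]+( [A-Z][a-z]+)+$"),  # capitalized names
    "Year":   re.compile(r"^\d{4}$"),                       # four-digit years
}

def label_column(values):
    """Return the domain-model class whose pattern most values match."""
    scores = {label: sum(bool(p.match(v)) for v in values)
              for label, p in PATTERNS.items()}
    return max(scores, key=scores.get)

print(label_column(["1893", "1921", "1950"]))            # Year
print(label_column(["Mary Cassatt", "Edward Hopper"]))   # Artist
```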

About the speaker:

Craig Knoblock is a Research Professor of Computer Science and Spatial Sciences at the University of Southern California (USC) and the Director of Information Integration at the USC Information Sciences Institute. He received his Ph.D. in computer science from Carnegie Mellon. His research focuses on techniques related to information integration, the Semantic Web, and Linked Data. He has published more than 250 journal articles, book chapters, and conference papers. Dr. Knoblock is an AAAI Fellow, a Distinguished Scientist of the ACM, and past President and Trustee of IJCAI. He was selected for the 2014 Robert S. Engelmore Award for his contributions to applied AI. He and his co-authors were recognized with the Best Research Paper award at ISWC 2012, on discovering concept coverings in ontologies of linked data sources, and the Best In-Use Paper award at ESWC 2013, on connecting the Smithsonian American Art Museum to the Linked Data Cloud.

[ FAI Archives ]

Fall 2022 - Spring 2023

Fall 2021 - Spring 2022

Fall 2020 - Spring 2021

Fall 2019 - Spring 2020

Fall 2018 - Spring 2019

Fall 2017 - Spring 2018

Fall 2016 - Spring 2017

Fall 2015 - Spring 2016

Fall 2014 - Spring 2015

Fall 2013 - Spring 2014

Fall 2012 - Spring 2013

Fall 2011 - Spring 2012

Fall 2010 - Spring 2011

Fall 2009 - Spring 2010

Fall 2008 - Spring 2009

Fall 2007 - Spring 2008

Fall 2006 - Spring 2007

Fall 2005 - Spring 2006

Spring 2005

Fall 2004

Spring 2004

Fall 2003

Spring 2003

Fall 2002

Spring 2002

Fall 2001

Spring 2001

Fall 2000

Spring 2000