Forum for Artificial Intelligence

Archive



[ Home   |   About FAI   |   Upcoming talks   |   Past talks ]



This website is the archive for past Forum for Artificial Intelligence talks. Please click this link to navigate to the list of current talks.

FAI meets every other week (or so) to discuss scientific, philosophical, and cultural issues in artificial intelligence. Both technical research topics and broader inter-disciplinary aspects of AI are covered, and all are welcome to attend!

If you would like to be added to the FAI mailing list, subscribe here. If you have any questions or comments, please send email to Catherine Andersson.






[ Upcoming talks ]





Fri, August 19
3:00PM
Vibhav Gogate
University of Texas at Dallas
Approximate Counting and Lifting for Scalable Inference and Learning in Markov Logic
Mon, August 29
11:00AM
Subramanian Ramamoorthy
University of Edinburgh
Representations and Models for Collaboratively Intelligent Robots
Fri, September 2
11:00AM
Kory Mathewson
University of Alberta
Developing Machine Intelligence to Improve Bionic Limb Control
Fri, September 9
11:00AM
Brenna Argall
Northwestern University
Turning Assistive Machines into Assistive Robots
Fri, September 30
11:00AM
Peter Stone
University of Texas at Austin
Artificial Intelligence and Life in 2030
[VIDEO OF TALK]
Tue, October 11
11:00AM
Vivek Srikumar
University of Utah
A Tale of Two Activations
Fri, October 14
11:00AM
Junyi Jessy Li
University of Pennsylvania
Text Specificity: How and Why
Fri, October 21
11:00AM
Jia Deng
University of Michigan
Going Deeper in Semantics and Mid-Level Vision
Mon, October 31
11:00AM
Ido Dagan
Bar Ilan University
Natural Language Knowledge Graphs
Wed, November 2
11:00AM
Marcus Rohrbach
UC Berkeley
Explain and Answer: Intelligent systems which can communicate about what they see
Fri, November 4
11:00AM
Ashish Kapoor
Microsoft Research
Safe Decision Making under Uncertainty
Fri, November 18
11:00AM
John Schulman
OpenAI
Optimizing Expectations: From Deep Reinforcement Learning to Stochastic Computation Graphs
Fri, December 2
11:00AM
Sam Bowman
New York University
Learning neural networks for sentence understanding with the Stanford NLI corpus
Mon, December 5
11:00AM
Torsten Schaub
University of Potsdam and INRIA Rennes
From SAT to ASP and Back
Mon, January 16
11:00AM
Eugene Vorobeychik
Vanderbilt University
Machine Learning Under Attack
Fri, February 17
11:00AM
Angel Xuan Chang
Princeton University
Learning spatial priors for text to 3D scene generation
Fri, March 24
11:00AM
Benjamin Kuipers
University of Michigan
How Can We Trust a Robot?
Wed, March 29
10:30AM
Barbara Grosz
Harvard University
From the Turing Test to Smart Partners: "Is Your System Smart Enough To Work With Us?"
Fri, April 7
11:00AM
Aviv Tamar
University of California at Berkeley
Generalization and Safety in Reinforcement Learning and Control
Fri, May 5
11:00AM
Amir H. Gandomi
Michigan State University
Evolutionary (Big) Data Mining and Optimization
Fri, June 9
11:00AM
Roger B. Dannenberg
Carnegie Mellon University
The Music of Robots: Music Automata from Pythagoras to the Future
Fri, July 21
11:00AM
Magnus Egerstedt
Georgia Institute of Technology
Coordinated Control of Multi-Robot Systems for Persistent Environmental Monitoring
Tue, July 25
11:00AM
Yolanda Gil
USC Information Sciences Institute
Combining Human Ingenuity with Machine Systematicity for Data Science: Towards Artificial Intelligence Research Assistants

Friday, August 19, 2016, 3:00PM



Approximate Counting and Lifting for Scalable Inference and Learning in Markov Logic

Vibhav Gogate   [homepage]

University of Texas at Dallas

Markov logic networks (MLNs) combine the relational representation power of first-order logic with the uncertainty representation power of probability. They often yield a compact representation and, as a result, are routinely used to represent background knowledge in a wide variety of application domains such as natural language understanding, computer vision, and bioinformatics. However, scaling up inference and learning in them is notoriously difficult, which limits their applicability. In this talk, I will describe two complementary approaches, one based on approximate counting and the other based on approximate lifting, for scaling up inference and learning in MLNs. The two approaches help remedy the following issue that adversely affects scalability: each first-order formula typically yields tens of millions of ground formulas, so even algorithms that are linear in the number of ground formulas are computationally infeasible. The approximate counting approaches are linear in the number of ground atoms (random variables), which can be much smaller than the number of ground formulas (features), while the approximate lifting approaches substantially reduce the number of ground atoms, further reducing the complexity. I will present theoretical guarantees as well as experimental results demonstrating the power of our new approaches. (Joint work with Parag Singla, Deepak Venugopal, David Smith, Tuan Pham, and Somdeb Sarkhel.)
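The scale problem the abstract describes is easy to quantify. The sketch below (our illustration, not part of the talk) counts ground atoms versus ground formulas for the transitivity rule Friends(x,y) ^ Friends(y,z) => Friends(x,z) over a domain of n constants:

```python
# Toy illustration (not Gogate et al.'s algorithm): counting groundings
# of Friends(x,y) ^ Friends(y,z) => Friends(x,z) over n constants.

def ground_atoms(n):
    # one ground atom per (x, y) pair for the binary predicate Friends
    return n * n

def ground_formulas(n):
    # one ground formula per (x, y, z) triple of constants
    return n ** 3

n = 1000
print(ground_atoms(n))     # 1,000,000 random variables
print(ground_formulas(n))  # 1,000,000,000 features
```

An algorithm linear in ground atoms therefore touches a thousand times fewer objects than one linear in ground formulas, even for this single rule.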

About the speaker:

Vibhav Gogate is an Assistant Professor in the Computer Science Department at the University of Texas at Dallas. He received his Ph.D. from the University of California, Irvine in 2009 and then spent two years as a post-doc at the University of Washington. His broad research interests are in artificial intelligence, machine learning and data mining. His ongoing focus is on probabilistic graphical models, their first-order logic based extensions such as Markov logic, and probabilistic programming. He is the co-winner of the 2010 UAI approximate probabilistic inference challenge and the 2012 PASCAL probabilistic inference competition.

Monday, August 29, 2016, 11:00AM



Representations and Models for Collaboratively Intelligent Robots

Subramanian Ramamoorthy   [homepage]

University of Edinburgh

We are motivated by the problem of building autonomous robots that are able to work collaboratively with other agents, such as human co-workers. One key attribute of such an autonomous system is the ability to make predictions about the actions and intentions of other agents in a dynamic environment - both to interpret the activity context as it is being played out and to adapt actions in response to that contextual information.

Drawing on examples from robotic systems we have developed in my lab, including mobile robots that can navigate effectively in crowded spaces and humanoid robots that can cooperate in assembly tasks, I will present recent results addressing the questions of how to efficiently capture the hierarchical nature of activities, and how to rapidly estimate latent factors, such as hidden goals and intent.

Firstly, I will describe a procedure for topological trajectory classification, using the concept of persistent homology, which enables unsupervised extraction of certain kinds of relational concepts in motion data. One use of this representation is in devising a multi-scale version of Bayesian recursive estimation, which is a step towards reliably grounding human instructions in the realized activity.
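As rough intuition for topological classification of trajectories (a toy sketch of ours, not the persistent-homology machinery of the talk), two paths between the same endpoints can be told apart by the signed angle they sweep around an obstacle:

```python
import math

def swept_angle(traj, obstacle):
    """Total signed angle a 2D trajectory sweeps around an obstacle point.
    Paths with the same endpoints that pass on opposite sides of the
    obstacle accumulate totals of opposite sign."""
    ox, oy = obstacle
    total = 0.0
    prev = math.atan2(traj[0][1] - oy, traj[0][0] - ox)
    for x, y in traj[1:]:
        ang = math.atan2(y - oy, x - ox)
        d = ang - prev
        # unwrap to (-pi, pi] so we accumulate the short-way increment
        while d > math.pi:
            d -= 2 * math.pi
        while d <= -math.pi:
            d += 2 * math.pi
        total += d
        prev = ang
    return total

obstacle = (0.0, 0.0)
above = [(-2, 0.5), (-1, 1), (0, 1.2), (1, 1), (2, 0.5)]      # passes above
below = [(-2, -0.5), (-1, -1), (0, -1.2), (1, -1), (2, -0.5)]  # passes below
# opposite signs => the two paths lie in different classes
print(swept_angle(above, obstacle) * swept_angle(below, obstacle) < 0)  # True
```

This is only a homotopy-flavored stand-in; the persistent-homology approach in the talk extracts such relational distinctions without hand-picking an obstacle point.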

Finally, I will describe work with a human-robot interface based on the use of mobile 3D eye tracking as a signal for intention inference. We achieve this by learning a probabilistic generative model of fixations conditioned on the task that the person is executing. Intention inference is achieved through inversion of this model, fixations depending on the location of objects or regions of interest in the environment. Using preliminary experimental results, I will discuss how this approach is useful in the grounding of plan symbols to their representation in the environment.

About the speaker:

Dr. Subramanian Ramamoorthy is a Reader (Associate Professor) in the School of Informatics, University of Edinburgh, where he has been on the faculty since 2007. He is a Coordinator of the EPSRC Robotarium Research Facility, and Executive Committee Member for the Centre for Doctoral Training in Robotics and Autonomous Systems. He received his PhD in Electrical and Computer Engineering from The University of Texas at Austin in 2007. He is an elected Member of the Young Academy of Scotland at the Royal Society of Edinburgh.

His research focus has been on robot learning and decision-making under uncertainty, with emphasis on problems involving human-robot and multi-robot collaborative activities. These problems are solved using a combination of machine learning techniques, with emphasis on issues of transfer, online and reinforcement learning, as well as new representations and analysis techniques based on geometric/topological abstractions.

His work has been recognised by nominations for Best Paper Awards at major international conferences - ICRA 2008, IROS 2010, ICDL 2012 and EACL 2014. He serves in editorial and programme committee roles for conferences and journals in the areas of AI and Robotics. He leads Team Edinferno, the first UK entry in the Standard Platform League at the RoboCup International Competition. This work has received media coverage, including by BBC News and The Telegraph, and has resulted in many public engagement activities, such as at the Royal Society Summer Science Exhibition, Edinburgh International Science festival and Edinburgh Festival Fringe.

Before joining the School of Informatics, he was a Staff Engineer with National Instruments Corp., where he contributed to five products in the areas of motion control, computer vision and dynamic simulation. This work resulted in seven US patents and numerous industry awards for product innovation.

Watch Online

Friday, September 2, 2016, 11:00AM



Developing Machine Intelligence to Improve Bionic Limb Control

Kory Mathewson   [homepage]

University of Alberta

Prosthetic limbs are artificial devices which serve as a replacement for missing or lost body parts. The first documented history of prosthetics is in the Rigveda, a Hindu text written over 3000 years ago. Advances in artificial limb hardware and interface technology have facilitated some improvements in functionality restoration, but upper limb amputation remains a difficult challenge for prosthetic replacement. Many prosthetic users reject the use of prosthetic limbs due to control system issues, lack of natural feedback, and functional limitations.

We propose the use of new high-performance computer algorithms, specifically real-time artificial intelligence (AI), to address these limitations. Using this AI, the limb can make predictions about the future and can share control with the user. The limb could also remember task-specific action sequences relevant in certain environments (think of playing the piano).
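The "predictions about the future" above are, in work from these groups, often formulated as general value functions learned online with temporal-difference methods. Here is a minimal TD(0) sketch of that idea (our simplification, with a synthetic two-state sensor signal; the names and setup are illustrative, not from the talk):

```python
import random

random.seed(0)

gamma = 0.9   # discount: how far ahead the prediction looks
alpha = 0.1   # learning rate for the online update
value = {0: 0.0, 1: 0.0}  # one prediction per (toy) limb state

def signal(state):
    # hypothetical sensor reading: state 1 produces activity, state 0 none
    return 1.0 if state == 1 else 0.0

state = 0
for step in range(5000):
    next_state = random.choice([0, 1])
    # TD(0): move the prediction toward the one-step bootstrapped target
    target = signal(next_state) + gamma * value[next_state]
    value[state] += alpha * (target - value[state])
    state = next_state

# both predictions approach the true discounted return, 0.5 / (1 - 0.9) = 5
print(round(value[0], 2), round(value[1], 2))
```

The appeal for prosthetics is that such predictions are cheap, incremental, and learned from ongoing interaction, exactly the setting the abstract describes.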

The integration of AI into a prosthetic limb is paradigm-shifting. Today's prosthetic limbs understand very little about the environment or the user; our work provides rich information to the limb in a usable, safe, stable, and reliable way. Improved two-way communication between the human and the device is a major step toward prosthetic embodiment, allowing the limb to learn from ongoing interaction with the user and provide a personalized experience.

This work is a collaboration between the Bionic Limbs for Improved Natural Control lab and the Reinforcement Learning and Artificial Intelligence lab at the University of Alberta, in Edmonton, Canada.

About the speaker:

Kory Mathewson is currently an intern on the Twitter Cortex team in San Francisco, California. His passions lie at the interface between humans and other intelligent systems. He is completing a PhD at the University of Alberta under the supervision of Dr. Richard Sutton and Dr. Patrick Pilarski, advancing interactive machine learning algorithms for deployment on robotic platforms and big-data personalization systems. He also holds a Master's degree in Biomedical Engineering and a Bachelor's degree in Electrical Engineering. To find out more, visit http://korymathewson.com.

Friday, September 9, 2016, 11:00AM



Turning Assistive Machines into Assistive Robots

Brenna Argall   [homepage]

Northwestern University

It is a paradox that often the more severe a person's motor impairment, the more challenging it is for them to operate the very assistive machines which might enhance their quality of life. A primary aim of my lab is to address this confound by incorporating robotics autonomy and intelligence into assistive machines---to offload some of the control burden from the user. Robots already synthetically sense, act in and reason about the world, and these technologies can be leveraged to help bridge the gap left by sensory, motor or cognitive impairments in the users of assistive machines. However, here the human-robot team is a very particular one: the robot is physically supporting or attached to the human, replacing or enhancing lost or diminished function. In this case getting the allocation of control between the human and robot right is absolutely essential, and will be critical for the adoption of physically assistive robots within larger society. This talk will overview some of the ongoing projects and studies in my lab, whose research lies at the intersection of artificial intelligence, rehabilitation robotics and machine learning. We are working with a range of hardware platforms, including smart wheelchairs and assistive robotic arms. A distinguishing theme present within many of our projects is that the machine automation is customizable---to a user's unique and changing physical abilities, personal preferences or even financial means.
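One simple way to allocate control between human and robot (a common baseline in shared autonomy, offered purely as an illustration rather than as the argallab's actual method) is linear blending of the two command streams, with the blending weight itself customizable per user:

```python
def blend(u_human, u_robot, alpha):
    """Linear arbitration between user and robot commands.
    alpha = 0 gives the user full control; alpha = 1 full autonomy.
    Making alpha adjustable is one simple way to customize assistance
    to a user's abilities and preferences."""
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("alpha must be in [0, 1]")
    return tuple(alpha * r + (1.0 - alpha) * h
                 for h, r in zip(u_human, u_robot))

# wheelchair velocity commands (v, omega): the user drives straight at a
# doorway, the autonomy nudges the heading to avoid clipping the frame
user_cmd  = (0.8, 0.00)
robot_cmd = (0.8, 0.30)
print(blend(user_cmd, robot_cmd, 0.5))  # (0.8, 0.15)
```

Research systems use far richer arbitration (intent-dependent, confidence-weighted), but this captures the core idea of a tunable division of control.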

About the speaker:

Brenna Argall is the June and Donald Brewer Junior Professor of Electrical Engineering & Computer Science at Northwestern University, and also an assistant professor in the Department of Mechanical Engineering and the Department of Physical Medicine & Rehabilitation. Her research lies at the intersection of robotics, machine learning and human rehabilitation. She is director of the assistive & rehabilitation robotics laboratory (argallab) at the Rehabilitation Institute of Chicago (RIC), the premier rehabilitation hospital in the United States, and her lab's mission is to advance human ability through robotics autonomy. Argall is a 2016 recipient of the NSF CAREER award. She received her Ph.D. in Robotics (2009) from the Robotics Institute at Carnegie Mellon University, as well as her M.S. in Robotics (2006) and B.S. in Mathematics (2002). Prior to joining Northwestern, she was a postdoctoral fellow (2009-2011) at the École Polytechnique Fédérale de Lausanne (EPFL), and prior to graduate school she held a Computational Biology position at the National Institutes of Health (NIH).

Friday, September 30, 2016, 11:00AM



Artificial Intelligence and Life in 2030
[VIDEO OF TALK]

Peter Stone   [homepage]

University of Texas at Austin

The One Hundred Year Study on Artificial Intelligence, launched in the fall of 2014, is a long-term investigation of the field of Artificial Intelligence (AI) and its influences on people, their communities, and society. As its core activity, the Standing Committee that oversees the One Hundred Year Study forms a Study Panel every five years to assess the current state of AI. The first Study Panel report, published in September 2016, focuses on eight domains the panelists considered to be most salient: transportation; service robots; healthcare; education; low-resource communities; public safety and security; employment and workplace; and entertainment. In each of these domains, the report both reflects on progress in the past fifteen years and anticipates developments in the coming fifteen years. The report also includes recommendations concerning AI-related policy.

This talk, by the Study Panel Chair, will briefly describe the process of creating the report and summarize its contents. The floor will then be opened for questions and discussion.

Attendees are strongly encouraged to read at least the executive summary, overview, and callouts (in the margins) of the report before the session: https://ai100.stanford.edu/2016-report

About the speaker:

Dr. Peter Stone is the David Bruton, Jr. Centennial Professor and Associate Chair of Computer Science, as well as Chair of the Robotics Portfolio Program, at the University of Texas at Austin. In 2013 he was awarded the University of Texas System Regents' Outstanding Teaching Award and in 2014 he was inducted into the UT Austin Academy of Distinguished Teachers, earning him the title of University Distinguished Teaching Professor. Professor Stone's research interests in Artificial Intelligence include machine learning (especially reinforcement learning), multiagent systems, robotics, and e-commerce. Professor Stone received his Ph.D. in Computer Science in 1998 from Carnegie Mellon University. From 1999 to 2002 he was a Senior Technical Staff Member in the Artificial Intelligence Principles Research Department at AT&T Labs - Research. He is an Alfred P. Sloan Research Fellow, Guggenheim Fellow, AAAI Fellow, Fulbright Scholar, and 2004 ONR Young Investigator. In 2003, he won an NSF CAREER award for his proposed long term research on learning agents in dynamic, collaborative, and adversarial multiagent environments, in 2007 he received the prestigious IJCAI Computers and Thought Award, given biannually to the top AI researcher under the age of 35, and in 2016 he was awarded the ACM/SIGAI Autonomous Agents Research Award.

Tuesday, October 11, 2016, 11:00AM



A Tale of Two Activations

Vivek Srikumar   [homepage]

University of Utah

Various factors contribute to the empirical successes of neural networks in recent years: for example, the availability of more data, better understanding of the algorithmic aspects of learning and optimization, and novel network architectures and activation functions. The choice of activation function is an important design consideration because it changes the expressive capacity of the network. But what functions can networks built from a given activation actually represent?

In this talk, I will focus on the representative power of two activation functions -- rectified linear units (ReLUs) and cosine neurons. While ReLUs were originally introduced as a means of easing optimization concerns, they have also led to empirical improvements in predictive accuracy across different tasks. As an explanation for this improvement, I will show that ReLU networks can compactly represent decision surfaces that would require exponentially larger threshold networks. Then, we will move to the less popular cosine neuron which is intimately connected to shift-invariant kernels. I will present a new analysis of the cosine neuron that not only quantifies its expressive capacity, but also naturally leads to regularization techniques that are formally justified. I will end this talk with a set of open research questions about connecting these formal results to empirical observations.
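For reference, the two activations in their standard textbook form, plus a tiny example of ReLU expressivity (the two-unit identity |x| = relu(x) + relu(-x)); this is background material, not the talk's constructions:

```python
import math

def relu(x):
    # rectified linear unit: identity for positive inputs, zero otherwise;
    # networks of these carve out piecewise-linear decision surfaces
    return max(0.0, x)

def cosine(x):
    # cosine neuron: periodic activation, closely tied to random Fourier
    # features and shift-invariant kernels
    return math.cos(x)

def abs_via_relu(x):
    # |x| exactly, with just two ReLU units
    return relu(x) + relu(-x)

print(relu(-2.0), relu(3.0))   # 0.0 3.0
print(cosine(0.0))             # 1.0
print(abs_via_relu(-4.0))      # 4.0
```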

About the speaker:

Vivek Srikumar is an assistant professor in the School of Computing at the University of Utah. Previously, he obtained his Ph.D. from the University of Illinois at Urbana-Champaign and was a post-doc at Stanford University. His research lies at the intersection of machine learning and natural language processing and is largely motivated by problems involving text understanding. In particular, he is interested in research questions related to developing semantic representations of text, learning discrete and real-valued representations of textual inputs using little or incidental supervision, and efficiently predicting these representations. His work has been published at various NLP and machine learning venues and recently received the best paper award at EMNLP.

Friday, October 14, 2016, 11:00AM



Text Specificity: How and Why

Junyi Jessy Li   [homepage]

University of Pennsylvania

Specificity is an important characterization of texts that is potentially beneficial for a range of applications such as automatic summarization, text simplification and sentiment analysis. Yet how do we assess specificity? How do terms lacking specificity connect with the discourse of the full text? In this talk I present our work linking sentence specificity with the discourse relation "instantiation". First I will present SPECITELLER, a fast and effective semi-supervised system for predicting sentence specificity. Crucially, the system relies on bootstrapping from lexical usage in the "instantiation" relation. I will further discuss insights into the different ways a sentence can lack specificity. In the second part of the talk, I will present our improved prediction of "instantiation" and its link with specificity.

About the speaker:

J. Jessy Li is a PhD candidate working with Prof. Ani Nenkova at the University of Pennsylvania. Her research interests are in natural language processing with focus on discourse understanding. Her PhD work has explored monolingual and cross-lingual discourse parsing, text intelligibility with respect to discourse and characterizing text specificity using discourse structure. She is a recipient of the 2016 AAAI/SIGAI Doctoral Consortium Scholarship and one of her papers was nominated for the best paper award at SIGDIAL 2016. She did internships at Yahoo Research (2015) and Microsoft Research (2013). Webpage: http://www.seas.upenn.edu/~ljunyi/

Friday, October 21, 2016, 11:00AM



Going Deeper in Semantics and Mid-Level Vision

Jia Deng   [homepage]

University of Michigan

Achieving human-level visual understanding requires extracting deeper semantics from images. In particular, it entails moving beyond detecting objects to understanding the relations between them. It also demands progress in mid-level vision, which extracts deeper geometric information such as pose and 3D. In this talk I will present recent work on both fronts. I will describe efforts on recognizing human-object interactions, an important type of relations between visual entities. I will present a state-of-the-art method on human pose estimation. Finally, I will discuss recovering 3D from a single image, a fundamental mid-level vision problem.

About the speaker:

Jia Deng is an Assistant Professor of Computer Science and Engineering at the University of Michigan. His research focus is on computer vision and machine learning, in particular, achieving human-level visual understanding by integrating perception, cognition, and learning. He received his Ph.D. from Princeton University and his B.Eng. from Tsinghua University, both in computer science. He is a recipient of the Yahoo ACE Award, a Google Faculty Research Award, the ICCV Marr Prize, and the ECCV Best Paper Award.

Monday, October 31, 2016, 11:00AM



Natural Language Knowledge Graphs

Ido Dagan   [homepage]

Bar Ilan University

How can we capture the information expressed in large amounts of text? And how can we allow people, as well as computer applications, to easily explore it? When comparing textual knowledge to formal knowledge representation (KR) paradigms, two prominent differences arise. First, typical KR paradigms rely on pre-specified vocabularies, which are limited in their scope, while natural language is inherently open. Second, in a formal knowledge base each fact is encoded in a single canonical manner, while in multiple texts a fact may be repeated with some redundant, complementary or even contradictory information.

In this talk, I will outline a new research direction, which we term Natural Language Knowledge Graphs (NLKG), that aims to represent textual information in a consolidated manner, based on the available natural language vocabulary and structure. I will first suggest some plausible requirements that such graphs should satisfy, that would allow effective communication of the encoded knowledge. Then, I will describe our current specification for NLKG structure, motivated by a use case of representing multiple tweets describing an event. Our structure merges co-referring individual proposition extractions, created in an Open-IE flavor, into a representation of consolidated entities and propositions, adapting the spirit of formal knowledge graphs. Different language expressions, denoting entities, arguments and propositions, are organized into entailment graphs, which allow tracing the inference relationships and redundancies between them. I will also illustrate the potential application of NLKGs for text exploration.

About the speaker:

Ido Dagan is a Professor at the Department of Computer Science at Bar-Ilan University, Israel and a Fellow of the Association for Computational Linguistics (ACL). His interests are in applied semantic processing, focusing on textual inference and natural-language based knowledge representation and acquisition. Dagan and colleagues established the textual entailment recognition paradigm. He was the President of the ACL in 2010 and served on its Executive Committee during 2008-2011. In that capacity, he led the establishment of the journal Transactions of the Association for Computational Linguistics. Dagan received his B.A. summa cum laude and his Ph.D. (1992) in Computer Science from the Technion. He was a research fellow at the IBM Haifa Scientific Center (1991) and a Member of Technical Staff at AT&T Bell Laboratories (1992-1994). During 1998-2003 he was co-founder and CTO of FocusEngine and VP of Technology of LingoMotors.

Wednesday, November 2, 2016, 11:00AM



Explain and Answer: Intelligent systems which can communicate about what they see

Marcus Rohrbach   [homepage]

UC Berkeley

Language is the most important channel for humans to communicate about what they see. To allow an intelligent system to effectively communicate with humans it is thus important to enable it to relate information in words and sentences with the visual world. One component in a successful communication is the ability to answer natural language questions about the visual world. A second component is the ability of the system to explain in natural language, why it gave a certain answer, allowing a human to trust and understand it. In my talk, I will show how we can build models which answer questions but at the same time are modular and expose their semantic reasoning structure. To explain the answer with natural language, I will discuss how we can learn to generate explanations given only image captions as training data by introducing a discriminative loss and using reinforcement learning.

About the speaker:

In his research Marcus Rohrbach focuses on relating visual recognition and natural language understanding with machine learning. Currently he is a Post-Doc with Trevor Darrell at UC Berkeley. He and his collaborators received the NAACL 2016 best paper award for their work on Neural Module Networks and won the Visual Question Answering Challenge 2016. During his PhD he worked at the Max Planck Institute for Informatics, Germany, with Bernt Schiele and Manfred Pinkal. He completed it in 2014 with summa cum laude at Saarland University and received the DAGM MVTec Dissertation Award 2015 from the German Pattern Recognition Society for it. His BSc and MSc degree in Computer Science are from the University of Technology Darmstadt, Germany (2006 and 2009). After his BSc, he spent one year at the University of British Columbia, Canada, as visiting graduate student.

Friday, November 4, 2016, 11:00AM



Safe Decision Making under Uncertainty

Ashish Kapoor   [homepage]

Microsoft Research

Machine learning is one of the key components enabling systems that operate under uncertainty. For example, robotic systems might employ sensors together with a machine-learned system to identify obstacles. However, such data-driven systems are far from perfect and can fail in ways that jeopardize safety. In this talk we will explore a framework that aims to preserve safety invariants despite the uncertainties in the environment arising from incomplete information. We will first describe a method to reason about safe plans and control strategies despite perceiving the world through noisy sensors and machine learning systems. At the heart of our approach is the new Probabilistic Signal Temporal Logic (PrSTL), an expressive language to define stochastic properties and enforce probabilistic guarantees on them. Next, we will consider extensions of these ideas to a sequential decision-making framework that trades off risk and reward in a near-optimal manner. We will demonstrate our approach by deriving safe plans and controls for quadrotors and autonomous vehicles in dynamic environments.
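To make the flavor of a probabilistic guarantee concrete, here is a hedged sketch of a single chance constraint, P(clearance > d_min) >= 1 - epsilon, checked under a Gaussian state estimate (our illustrative form only; PrSTL's syntax and machinery are richer):

```python
import math

def normal_cdf(x, mu, sigma):
    # Gaussian CDF via the error function
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def safe(mu, sigma, d_min, epsilon):
    """Illustrative PrSTL-style atom: satisfied iff
    P(clearance > d_min) >= 1 - epsilon, where the clearance estimate
    (e.g. distance to an obstacle from noisy perception) is Gaussian
    with mean mu and standard deviation sigma."""
    return 1.0 - normal_cdf(d_min, mu, sigma) >= 1.0 - epsilon

# estimated clearance: 2.0 m with 0.5 m std; require 95% confidence
print(safe(2.0, 0.5, 1.0, 0.05))  # True:  P(clearance > 1.0) ~ 0.977
print(safe(2.0, 0.5, 1.8, 0.05))  # False: P(clearance > 1.8) ~ 0.655
```

A planner can then keep only those plans whose atoms all evaluate to True, which is the spirit of enforcing safety invariants under perception noise.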

About the speaker:

Ashish Kapoor is a senior researcher at Microsoft Research, Redmond. Currently, his research focuses on Aerial Informatics and Robotics with an emphasis on building intelligent and autonomous flying agents that are safe and enable applications that can positively influence our society. The research builds upon cutting-edge work in machine intelligence, robotics and human-centered computation in order to enable an entire fleet of flying robots that range from micro-UAVs to commercial jetliners. Various application scenarios include weather sensing, monitoring for precision agriculture, and safe cyber-physical systems. Ashish received his PhD from the MIT Media Laboratory in 2006.

Friday, November 18, 2016, 11:00AM



Optimizing Expectations: From Deep Reinforcement Learning to Stochastic Computation Graphs

John Schulman   [homepage]

OpenAI

Reinforcement learning can be viewed as an optimization problem: maximize the expected total reward with respect to the parameters of the policy. My talk will describe recent work on making policy gradient methods more sample-efficient and reliable, especially when used with expressive nonlinear function approximators such as neural networks. Then I will also describe how similar techniques for gradient estimation can be used in other machine learning problems, such as variational inference, using the formalism of stochastic computation graphs.
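The common core of policy gradients and stochastic computation graphs is the score-function (likelihood-ratio) estimator, grad_theta E[f(x)] = E[f(x) * grad_theta log p_theta(x)]. A minimal Monte Carlo check with a Bernoulli distribution (a toy of ours, not an example from the talk):

```python
import random

random.seed(0)

theta = 0.3  # parameter of a Bernoulli "policy": P(x = 1) = theta

def grad_log_p(x, theta):
    # d/dtheta log p_theta(x) for x ~ Bernoulli(theta)
    return 1.0 / theta if x == 1 else -1.0 / (1.0 - theta)

def reward(x):
    # toy objective f(x) = x, so E[f(x)] = theta and the true gradient is 1
    return float(x)

n = 100_000
estimate = sum(
    reward(x) * grad_log_p(x, theta)
    for x in (1 if random.random() < theta else 0 for _ in range(n))
) / n
print(estimate)  # close to the true gradient, 1.0
```

The estimator needs no derivative of f or of the sampling process itself, which is exactly why the same trick generalizes from policy gradients to arbitrary stochastic computation graphs; much of the cited work is about driving down its variance.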

About the speaker:

John is a research scientist at OpenAI. Previously he was in the computer science PhD program at UC Berkeley, and before that he studied physics at Caltech. His research focuses on reinforcement learning. He previously performed research in (and is still interested in) neuroscience. Outside of work, he enjoys reading, running, and listening to jazz music.

Friday, December 2, 2016, 11:00AM



Learning neural networks for sentence understanding with the Stanford NLI corpus

Sam Bowman   [homepage]

New York University

In this two-part talk, I’ll first introduce the Stanford Natural Language Inference corpus (SNLI, EMNLP ‘15), then present the Stack-Augmented Parser-Interpreter NN (SPINN, ACL ‘16), a model developed on that corpus.

SNLI is a human-annotated corpus for training and evaluating machine learning models on natural language inference, the task of judging the truth of one sentence conditioned on the truth of another. Natural language inference is a particularly effective way to evaluate machine learning models for sentence understanding, and SNLI’s large size (570k sentence pairs) makes it newly possible to evaluate low-bias models like neural networks in this setting. I will describe our novel data collection methods, discuss the quality of the corpus, and present some results from other research groups that have used it.

SPINN is a neural network model for sentence encoding that builds on past work on tree-structured neural networks (e.g. Socher et al. ‘11, Tai et al ‘15). It re-implements the core operations of those networks in a way that improves upon them in three ways: It improves the quality of the resulting sentence representations by combining sequence- and tree-based approaches to semantic composition, it makes it possible to run the model without an external parser, and it enables the use of minibatching and GPU computation for the first time, yielding speedups of up to 25× and making tree-based models competitive in speed with simple RNNs for the first time.
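SPINN's control flow can be pictured as a shift-reduce loop over a buffer and a stack. In the toy sketch below (ours; the real model composes with a learned TreeLSTM and a tracking LSTM), composition is plain vector addition so the stack mechanics stay visible:

```python
# Toy sketch of SPINN's shift-reduce control flow.

def encode(token_vecs, transitions):
    """token_vecs: token embeddings in left-to-right order.
    transitions: 'S' (shift) pushes the next token onto the stack,
    'R' (reduce) pops the top two phrases and pushes their composition.
    Because every step is a fixed-size stack/buffer operation, sentences
    with different tree shapes can be minibatched together."""
    buffer = list(reversed(token_vecs))
    stack = []
    for op in transitions:
        if op == 'S':
            stack.append(buffer.pop())
        else:  # 'R': compose the top two phrases (here: vector addition)
            right = stack.pop()
            left = stack.pop()
            stack.append(tuple(l + r for l, r in zip(left, right)))
    assert len(stack) == 1, "transitions must leave exactly one phrase"
    return stack[0]

# the tree ((the cat) sat) over 2-d toy embeddings:
vecs = [(1.0, 0.0), (0.0, 1.0), (2.0, 2.0)]
print(encode(vecs, ['S', 'S', 'R', 'S', 'R']))  # (3.0, 3.0)
```

The transition sequence is exactly a binary parse linearized into shifts and reduces, which is what lets SPINN run without an external parser at test time.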

About the speaker:

Sam Bowman recently started as an assistant professor at New York University, appointed in the Department of Linguistics and the Center for Data Science. He recently completed a PhD in the Department of Linguistics at Stanford University as a member of the Stanford Natural Language Processing Group. Sam's research is focused on building artificial neural network models for solving large-scale language understanding problems within natural language processing, and in using those models to learn about the human capacity for language understanding.

Monday, December 5, 2016, 11:00AM



From SAT to ASP and Back

Torsten Schaub   [homepage]

University of Potsdam and INRIA Rennes

Answer Set Programming (ASP) provides an approach to declarative problem solving that combines a rich yet simple modeling language with effective Boolean constraint solving capabilities. This makes ASP a model-ground-solve paradigm: a problem is expressed as a set of first-order rules, which are turned into a propositional format by systematically replacing all variables (grounding), before the models of the resulting propositional rules are computed (solving). Owing to its nonmonotonic semantic foundations, ASP is particularly suited to modeling problems in Knowledge Representation and Reasoning that involve incomplete, inconsistent, and changing information.
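
As a sketch of the semantics underlying the "solve" step, a brute-force stable-model (answer-set) enumerator for propositional normal programs fits in a few lines of Python. This is illustrative only; real ASP solvers use conflict-driven search rather than enumeration, and the encoding of rules below is an assumption made for the example.

```python
from itertools import chain, combinations

# A rule is (head, positive_body, negative_body) over propositional atoms.
def least_model(definite_rules):
    """Least model of a negation-free program via fixpoint iteration."""
    model, changed = set(), True
    while changed:
        changed = False
        for head, pos in definite_rules:
            if pos <= model and head not in model:
                model.add(head)
                changed = True
    return model

def is_stable(program, candidate):
    # Gelfond-Lifschitz reduct: drop rules whose negative body intersects
    # the candidate; delete negative literals from the remaining rules.
    reduct = [(h, pos) for h, pos, neg in program if not (neg & candidate)]
    return least_model(reduct) == candidate

def stable_models(program, atoms):
    subsets = chain.from_iterable(
        combinations(atoms, r) for r in range(len(atoms) + 1))
    return [set(s) for s in subsets if is_stable(program, set(s))]

# The classic two-rule program:  p :- not q.   q :- not p.
program = [("p", set(), {"q"}), ("q", set(), {"p"})]
print(stable_models(program, ["p", "q"]))  # [{'p'}, {'q'}]
```

The two stable models illustrate the nonmonotonic flavor: neither atom is derivable on its own, yet each is justified in a model where the other is absent.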

From a formal perspective, ASP allows for solving all search problems in NP (and NP^NP) in a uniform way, that is, by separating problem encodings from instances. Hence, ASP is well suited for solving hard combinatorial search (and optimization) problems. Interesting applications of ASP include decision support systems for NASA shuttle controllers, industrial team-building, music composition, natural language processing, robotics, and many more. The ASP solver CLASP has seen more than 45,000 downloads worldwide since its inception at the end of 2008.

The talk will start with a gentle introduction to ASP, focusing on its commonalities with and differences from SAT. It will discuss their different semantic foundations and describe the impact of a modelling language along with off-the-shelf grounding systems. Finally, it will highlight some resulting techniques, such as meta-programming, preference handling, heuristic constructs, and theory reasoning.

About the speaker:

Torsten Schaub received his diploma and dissertation in informatics in 1990 and 1992, respectively, from the Technical University of Darmstadt, Germany, and his habilitation in informatics in 1995 from the University of Rennes I, France. From 1990 to 1993 he was a research assistant at the Technical University of Darmstadt. From 1993 to 1995, he was a research associate at IRISA/INRIA in Rennes. In 1995 he became University Professor at the University of Angers, and since 1997 he has been University Professor for knowledge processing and information systems at the University of Potsdam. In 1999, he became Adjunct Professor at the School of Computing Science at Simon Fraser University, Canada, and since 2006 he has also been an Adjunct Professor in the Institute for Integrated and Intelligent Systems at Griffith University, Australia. Since 2014, Torsten Schaub has held an Inria International Chair at Inria Rennes - Bretagne Atlantique. He became a Fellow of ECCAI in 2012, and in 2014 he was elected President of the Association for Logic Programming. He served as program (co-)chair of LPNMR'09, ICLP'10, and ECAI'14. His research interests range from the theoretical foundations to the practical implementation of reasoning from incomplete, inconsistent, and evolving information. His current research focuses on Answer Set Programming and materializes at potassco.sourceforge.net, the home of the open-source Potassco project, which bundles software for Answer Set Programming developed at the University of Potsdam.

Watch Online

Monday, January 16, 2017, 11:00AM



Machine Learning Under Attack

Yevgeniy Vorobeychik   [homepage]

Vanderbilt University

The success of machine learning, particularly in supervised settings, has led to numerous attempts to apply it in adversarial settings such as spam and malware detection. The core challenge in this class of applications is that adversaries are not static data generators, but make a deliberate effort to either evade the classifiers deployed to detect them, or degrade the quality of the data used to train the classifiers. I will discuss our recent research into the problem of adversarial classifier evasion, considering both the problem of adversarial (threat) modeling, and the defender's problem of hardening classifiers against evasion attacks. Finally, I will describe our use of similar approaches for privacy-preserving data sharing.

About the speaker:

Yevgeniy Vorobeychik is an Assistant Professor of Computer Science, Computer Engineering, and Biomedical Informatics at Vanderbilt University. Previously, he was a Principal Member of Technical Staff at Sandia National Laboratories. Between 2008 and 2010 he was a post-doctoral research associate in the Computer and Information Science department at the University of Pennsylvania. He received Ph.D. (2008) and M.S.E. (2004) degrees in Computer Science and Engineering from the University of Michigan, and a B.S. degree in Computer Engineering from Northwestern University. His work focuses on adversarial reasoning in AI, computational game theory, security and privacy, network science, and agent-based modeling. Dr. Vorobeychik has published over 100 research articles on these topics. He was nominated for the 2008 ACM Doctoral Dissertation Award, received honorable mention for the 2008 IFAAMAS Distinguished Dissertation Award, and was an invited early career spotlight speaker at IJCAI 2016.

Friday, February 17, 2017, 11:00AM



Learning spatial priors for text to 3D scene generation

Angel Xuan Chang   [homepage]

Princeton University

The ability to form a visual interpretation of the world from natural language is pivotal to human communication. Being able to map descriptions of scenes to 3D geometric representations can be useful in many applications such as robotics and conversational assistants. In this talk, I will present the task of text to 3D scene generation, where a scene description in natural language is automatically converted into a plausible 3D scene interpretation. For example, the sentence "a living room with a red couch and TV" should generate a realistic living room arrangement with the TV in front of the couch and supported by a TV stand. This task lies at the intersection of NLP and computer graphics, and requires techniques from both.

A key challenge in this task is that the space of geometric interpretations is large while natural language text is typically under-specified, omitting shared, common-sense facts about the world. I will describe how we can learn a set of spatial priors from virtual environments, and use them to infer plausible arrangements of objects given a natural language description. I will show that a parallel corpus of virtual 3D scenes and natural language descriptions can be leveraged to extract likely couplings between references and concrete 3D objects (e.g., an "L-shaped red couch", and the virtual geometric representation of that object). Finally, I will discuss recent work in establishing large scale datasets for indoor 3D scene understanding.
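
As a rough illustration of what such a spatial prior can look like, one can fit a Gaussian to observed relative offsets between two object categories in example scenes and score candidate placements by log-density. All names and numbers below are invented for the example; this is not the talk's actual model.

```python
import numpy as np

# Hypothetical 2D offsets of a TV relative to a couch, collected from
# example scenes (units and values are made up for illustration).
observed_offsets = np.array([[1.9, 0.1], [2.1, -0.2], [2.0, 0.3], [1.8, 0.0]])
mu = observed_offsets.mean(axis=0)
cov = np.cov(observed_offsets.T) + 1e-3 * np.eye(2)  # regularize

def log_prior(offset):
    """Gaussian log-density of a candidate relative placement."""
    d = np.asarray(offset) - mu
    _, logdet = np.linalg.slogdet(2 * np.pi * cov)
    return -0.5 * (d @ np.linalg.solve(cov, d) + logdet)

in_front = log_prior([2.0, 0.0])   # roughly "in front of the couch"
far_away = log_prior([8.0, 5.0])   # implausible placement
print(in_front > far_away)  # True
```

A scene generator can then search over placements that jointly maximize such priors for all object pairs mentioned (or implied) by the input text.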

About the speaker:

Angel Chang received her PhD from Stanford University, where she worked in the Stanford NLP Group advised by Chris Manning. Her research focuses on the intersection of natural language understanding, computer graphics, and AI. She is currently a postdoctoral researcher working with Tom Funkhouser in the Princeton graphics and vision group. More details at http://angelxuanchang.github.io

Friday, March 24, 2017, 11:00AM



How Can We Trust a Robot?

Benjamin Kuipers   [homepage]

University of Michigan

Intelligent robots may increasingly participate in our society, driving on our roads, caring for our children and our elderly, and in many other ways. Can we trust them?

Society depends on cooperation, which requires partners to trust each other, and to refrain from exploiting the vulnerability that trust entails. Straightforward applications of game theory, where each partner acts to maximize its own expected reward, often lead to very poor outcomes. Societies impose social norms to lead individuals away from attractive selfish decisions, toward better outcomes for everyone. Robots will need to explicitly establish their trustworthiness as potential collaborative partners by following the social norms of society.

Philosophical theories of ethics (utilitarianism, deontology, virtue ethics) provide useful ideas toward designing robots that can follow social norms, but no one theory is adequate. A hybrid architecture combining multiple methods at different time-scales will be required to ensure that a robot behaves well in the complex human physical and social environment.

I conclude by applying these ideas to the Deadly Dilemma, a frequently raised challenge for intelligent robots acting as self-driving cars.

About the speaker:

Benjamin Kuipers is a Professor of Computer Science and Engineering at the University of Michigan. He was previously a Professor of Computer Sciences at the University of Texas at Austin. He received his B.A. from Swarthmore College, and his Ph.D. from MIT. He served as Department Chair at UT Austin, and is a Fellow of AAAI, IEEE, and AAAS. He investigates the representation of commonsense and expert knowledge, with particular emphasis on the effective use of incomplete knowledge. His research accomplishments include developing the QSIM algorithm for qualitative simulation, the Spatial Semantic Hierarchy models of knowledge for robot exploration and mapping, and methods whereby an agent without prior knowledge of its sensors, effectors, or environment can learn its own sensorimotor structure, the spatial structure of its environment, and its own object and action abstractions for higher-level interactions with its world.

Wednesday, March 29, 2017, 10:30AM



From the Turing Test to Smart Partners: "Is Your System Smart Enough To Work With Us?"

Barbara Grosz   [homepage]

Harvard University

For much of its history, most research in the field of Artificial Intelligence (AI) has centered on issues of building intelligent machines, independently of a consideration of their interactions with people. As the world of computing has evolved, and systems, smart or otherwise, pervade ever more facets of life, tackling the challenges of building computer systems smart enough to work effectively with people, in groups as well as individually, has become of increasing importance. In this talk, I will argue for considering “people-in-the-loop” as central to AI for both pragmatic and cognitive science reasons, present some fundamental scientific questions this teamwork stance raises, and describe research by my group on computational models of collaboration and their use in supporting health-care coordination.

About the speaker:

Barbara J. Grosz is Higgins Professor of Natural Sciences at Harvard University. Grosz specializes in natural language processing and multi-agent systems. She established the research field of computational modeling of discourse and developed some of the earliest computer dialogue systems, pioneered models of collaboration, and developed collaborative multi-agent systems and collaborative systems for human-computer communication. Grosz is known for her leadership in the field of artificial intelligence and her role in the establishment and leadership of interdisciplinary institutions, and she is widely respected for her contributions to the advancement of women in science.

From 2007 to 2011, Grosz served as interim dean and then dean of Harvard’s Radcliffe Institute for Advanced Study, and from 2001 to 2007 she was the Institute’s first dean of science, designing and building its science program.

Watch Online

Friday, April 7, 2017, 11:00AM



Generalization and Safety in Reinforcement Learning and Control

Aviv Tamar   [homepage]

University of California at Berkeley

Reinforcement learning (RL) is an area of machine learning that covers learning decision making and control through trial and error. Motivated by recent impressive results from combining RL with deep neural networks, there is renewed excitement in the field, and a promise of autonomous robot control, among other AI domains.

This talk will focus on two challenges at the forefront of RL research: how to ensure the safety of the learned policy with respect to various sources of uncertainty, and how to learn policies that can generalize well to variations in the task and environment.

Our approach to safety is inspired by ideas from mathematical finance, and the theory of risk-sensitive decision making. We show that by incorporating risk measures into the RL optimization objective, we can learn policies that guarantee safety against noise and modelling errors. We develop efficient algorithms for learning with such risk-sensitive objectives, and provide theoretical convergence guarantees and error bounds for our methods.
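
As a concrete example of the kind of risk measure involved, Conditional Value-at-Risk (CVaR) replaces the expectation with the mean of the worst outcomes. The snippet below only illustrates the measure itself on made-up sampled returns; the talk's contribution is optimizing such objectives during learning, which is not shown here.

```python
import numpy as np

def cvar(returns, alpha=0.1):
    """Empirical CVaR: mean of the worst alpha-fraction of returns."""
    returns = np.sort(np.asarray(returns, dtype=float))  # ascending: worst first
    k = max(1, int(np.ceil(alpha * len(returns))))
    return returns[:k].mean()

# Mostly-good returns with rare catastrophic outcomes (illustrative data).
returns = [10.0, 9.0, 8.0, -50.0, 9.5, 10.0, 8.5, 9.0, 10.0, -40.0]
print(np.mean(returns))    # the plain expectation hides the rare disasters
print(cvar(returns, 0.2))  # -45.0: mean of the worst 20% of returns
```

A risk-sensitive policy trained against a CVaR-style objective is penalized heavily for the tail events that an expectation-maximizing policy would happily accept.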

To better cope with generalization in deep RL, we introduce the Value Iteration Network (VIN): a fully differentiable neural network with a 'planning module' embedded within. VINs are suitable for predicting outcomes that involve planning-based reasoning, such as policies for RL and control. Key to our approach is a novel differentiable approximation of the value iteration planning algorithm, which can be represented as a convolutional neural network. This network can be trained end-to-end to learn the parameters of a planning computation that is relevant for the task at hand. We show that by learning such a planning computation, VIN policies generalize better to variations in the task and environment.
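
The core observation behind VINs can be sketched in a few lines: one step of value iteration on a grid MDP is a local shift-and-max operation, which is why K iterations can be unrolled into a K-layer convolutional network with learnable parameters. The rewards, grid size, and discount below are made up for illustration (and np.roll gives wrap-around boundaries); this is not the authors' implementation.

```python
import numpy as np

def value_iteration_step(V, R, gamma=0.9):
    """One Bellman backup on a grid: local shifts, then a max over actions."""
    # One shifted value map per move (up/down/left/right, with wrap-around).
    shifts = [np.roll(V, s, axis=a) for a in (0, 1) for s in (1, -1)]
    Q = np.stack([R + gamma * Vs for Vs in shifts])  # one channel per action
    return Q.max(axis=0)  # max over the action channels

R = np.zeros((5, 5)); R[4, 4] = 1.0  # single goal reward in one corner
V = np.zeros_like(R)
for _ in range(20):                  # in a VIN, these are network layers
    V = value_iteration_step(V, R)
print(V[0, 0] < V[4, 4])  # True: value increases toward the goal
```

In the VIN the reward map and the "shift" filters are produced by learned convolutions, so backpropagating through this stack trains the planning computation end-to-end.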

I will also discuss an extension of the VIN idea to continuous control, based on a model-predictive control (MPC) framework. The performance of this method will be demonstrated on learning object manipulation tasks using the PR2 robot.

About the speaker:

Aviv Tamar received the M.Sc. and Ph.D. degrees in electrical engineering from the Technion - Israel Institute of Technology, in 2011 and 2015, respectively. Since 2015, he has been a postdoctoral scholar in the EECS department at the University of California, Berkeley. His research interests include reinforcement learning and robotics.

Friday, May 5, 2017, 11:00AM



Evolutionary (Big) Data Mining and Optimization

Amir H. Gandomi   [homepage]

Michigan State University

Evolutionary computation (EC) has been widely used over the last two decades and remains a highly active research topic, especially for complex real-world problems. EC techniques are a subset of artificial intelligence, but they differ from classical methods in that their intelligence is drawn from biological systems, or from nature in general. Their efficiency stems from their remarkable ability to imitate the best features of nature, which have evolved by natural selection over millions of years. The main theme of this presentation is EC techniques and their application to real-world problems, and it is divided into two sections: (big) data mining and global optimization. First, applications of evolutionary computing in data mining will be presented, along with recent advances such as big data mining; here, some of my studies on big data mining and modeling using EC, and genetic programming in particular, will be presented. As case studies, EC applications to real-world problems will be introduced, and the application of EC to response modeling of a complex engineering system under seismic loads will be explained in detail to demonstrate the applicability of these algorithms to a complex real-world problem. In the second section, evolutionary optimization algorithms and their key applications to the optimization of complex and nonlinear systems will be discussed. It will also be explained how such algorithms have been adapted to real-world problems and how their advantages over classical optimization methods are realized in practice. Optimization results for large-scale systems using EC will be presented, demonstrating the applicability of EC, and some heuristics that can be combined with EC to significantly improve optimization results will be explained.
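
For readers unfamiliar with the area, a minimal evolutionary-optimization loop looks like the following. This is a generic illustrative sketch (truncation selection plus bit-flip mutation on the standard OneMax toy problem), not an algorithm from the talk.

```python
import random

def evolve(n_bits=20, pop_size=30, generations=60, seed=1):
    """Evolve bit strings to maximize OneMax (the number of ones)."""
    rng = random.Random(seed)
    fitness = lambda ind: sum(ind)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        # Truncation selection: keep the fitter half as parents (elitist).
        parents = sorted(pop, key=fitness, reverse=True)[: pop_size // 2]
        children = []
        for p in parents:
            # Each parent yields one child; flip each bit with prob 1/n_bits.
            children.append([b ^ (rng.random() < 1.0 / n_bits) for b in p])
        pop = parents + children  # survivors plus mutated offspring
    return max(pop, key=fitness)

best = evolve()
print(sum(best))  # typically close to the optimum of 20 ones
```

The same select-vary-replace skeleton underlies genetic algorithms, genetic programming, and most other EC methods; what changes is the representation and the variation operators.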

About the speaker:

Amir H. Gandomi received his Ph.D. in engineering from the University of Akron, OH. He was selected as an elite in 2008 by the National Elites Foundation. He has lectured at several universities, and he is currently a distinguished research fellow in the NSF Bio/computational Evolution in Action CONsortium (BEACON) center located at Michigan State University, MI. He will join the Information Systems faculty at Stevens Institute of Technology in Fall 2017. Dr. Gandomi has published over one hundred journal papers and four books. Some of those publications are now among the hottest papers in the field, and collectively they have been cited more than 6,500 times (h-index = 42). He has also served as associate editor, editor, and guest editor for several prestigious journals and has delivered several keynote/invited talks. Dr. Gandomi is part of a NASA technology cluster on Big Data, Artificial Intelligence, Machine Learning and Autonomy. His research interests are global optimization and (big) data mining using machine learning and evolutionary computation in particular.

Friday, June 9, 2017, 11:00AM



The Music of Robots: Music Automata from Pythagoras to the Future

Roger B. Dannenberg   [homepage]

Carnegie Mellon University

Mankind has always been fascinated by mechanical simulations of human activity. A characteristic of Western thought is to understand our minds, bodies, and our world in mechanistic terms. Music especially has been viewed as a mathematical and mechanical process. As a result, musicians, scientists, and engineers have devised all sorts of mechanisms and systems for composing and performing music. I will review some of these activities and their evolution over the last 2500 years, ending with some work on music composition, music performance, and how humanoid robot performers change the way we hear music.

About the speaker:

Roger B. Dannenberg is Professor of Computer Science, Art, and Music at Carnegie Mellon University. Dannenberg is well known for his computer music research, especially in real-time interactive systems. His pioneering work in computer accompaniment led to three patents and the SmartMusic system now used by tens of thousands of music students. He is the co-creator of Audacity, a free audio editor with over 250 million downloads. Other innovations include the application of machine learning to music style classification and the automation of music structure analysis. As a trumpet player, he has performed in concert halls ranging from the historic Apollo Theater in Harlem to the Espace de Projection in Paris, and he is active in performing jazz, classical, and new works. As a composer, he writes for interactive electronics, and his opera co-composed with Jorge Sastre, La Mare dels Peixos (The Mother of the Fishes), was completed and performed in 2016.

Friday, July 21, 2017, 11:00AM



Coordinated Control of Multi-Robot Systems for Persistent Environmental Monitoring

Magnus Egerstedt   [homepage]

Georgia Institute of Technology

By now, we have a fairly good understanding of how to design coordinated control strategies for making teams of mobile robots achieve geometric objectives in a distributed manner, such as assembling shapes or covering areas. But the mapping from high-level tasks to geometric objectives is not particularly well understood. In this talk, we investigate this topic in the context of persistent autonomy, i.e., we consider teams of robots, deployed in an environment over a sustained period of time, that can be recruited to perform a number of different tasks in a distributed, safe, and provably correct manner. This development will involve the composition of multiple barrier certificates for encoding the tasks and safety constraints, as well as a detour into ecology as a way of understanding how persistent environmental monitoring can be achieved by studying animals with low-energy lifestyles, such as the three-toed sloth.

About the speaker:

Magnus Egerstedt is the Executive Director for the Institute for Robotics and Intelligent Machines at the Georgia Institute of Technology, where he also holds the Julian T. Hightower Chair in Systems and Controls in the School of Electrical and Computer Engineering. He received the M.S. degree in Engineering Physics and the Ph.D. degree in Applied Mathematics from the Royal Institute of Technology, Stockholm, Sweden, the B.A. degree in Philosophy from Stockholm University, and was a Postdoctoral Scholar at Harvard University. Dr. Egerstedt conducts research in the areas of control theory and robotics, with particular focus on control and coordination of complex networks, such as multi-robot systems, mobile sensor networks, and cyber-physical systems. Magnus Egerstedt is a Fellow of the IEEE, and has received a number of teaching and research awards, including the Ragazzini Award from the American Automatic Control Council, the Outstanding Doctoral Advisor Award and the HKN Outstanding Teacher Award from Georgia Tech, and the Alumni of the Year Award from the Royal Institute of Technology.

Tuesday, July 25, 2017, 11:00AM



Combining Human Ingenuity with Machine Systematicity for Data Science: Towards Artificial Intelligence Research Assistants

Yolanda Gil   [homepage]

USC Information Sciences Institute

Science faces complex integrative problems of increasing societal importance that are orders of magnitude more challenging every decade.  Computing has had a prime role in handling that complexity, but has focused mostly on managing large calculations over data.  There are many unexplored opportunities for Artificial Intelligence (AI) to break new barriers as assistants in research tasks that involve handling complex information spaces and searching systematically for plausible hypotheses.  Unlike AI innovations that have been very successful in the commercial arena, these AI research assistants for science have a crucial requirement to capture all forms of scientific knowledge in order to accept guidance from humans and to place new findings in the context of what is known.  In this talk, I will describe our ongoing research on intelligent workflow systems that capture scientific knowledge about data and analytic processes to assist scientists in analyzing data systematically and efficiently while providing customized explanations of their findings.  I will also describe a new research project to develop an AI research assistant capable of hypothesis-driven discovery by capturing experimental design strategies that determine what data and analysis methods are relevant for a given hypothesis.  I will discuss how AI research motivated by science challenges will significantly augment our ability to tackle fundamental problems in big data analytics that have been a barrier for progress in many areas.

About the speaker:

Dr. Yolanda Gil is Director of Knowledge Technologies and Associate Division Director at the Information Sciences Institute of the University of Southern California, and Research Professor in the Computer Science Department.  She received her M.S. and Ph.D. degrees in Computer Science from Carnegie Mellon University, with a focus on artificial intelligence.  Her research is on intelligent interfaces for knowledge capture, which she investigates in a variety of projects concerning knowledge-based planning and problem solving, information analysis and assessment of trust, semantic annotation and metadata, and community-wide development of knowledge bases.  In recent years, Dr. Gil has collaborated with scientists in different domains on semantic workflows, metadata capture, social knowledge collection, and computer-mediated collaboration.  She is a Fellow of the Association for Computing Machinery (ACM), and Past Chair of its Special Interest Group in Artificial Intelligence (SIGAI).  She is also a Fellow of the Association for the Advancement of Artificial Intelligence (AAAI), and was elected as its 24th President in 2016.

[ FAI Archives ]

Fall 2022 - Spring 2023

Fall 2021 - Spring 2022

Fall 2020 - Spring 2021

Fall 2019 - Spring 2020

Fall 2018 - Spring 2019

Fall 2017 - Spring 2018

Fall 2016 - Spring 2017

Fall 2015 - Spring 2016

Fall 2014 - Spring 2015

Fall 2013 - Spring 2014

Fall 2012 - Spring 2013

Fall 2011 - Spring 2012

Fall 2010 - Spring 2011

Fall 2009 - Spring 2010

Fall 2008 - Spring 2009

Fall 2007 - Spring 2008

Fall 2006 - Spring 2007

Fall 2005 - Spring 2006

Spring 2005

Fall 2004

Spring 2004

Fall 2003

Spring 2003

Fall 2002

Spring 2002

Fall 2001

Spring 2001

Fall 2000

Spring 2000