Forum for Artificial Intelligence

[ About FAI   |   Upcoming talks   |   Past talks ]



The Forum for Artificial Intelligence meets every other week (or so) to discuss scientific, philosophical, and cultural issues in artificial intelligence. Both technical research topics and broader inter-disciplinary aspects of AI are covered, and all are welcome to attend!

If you would like to be added to the FAI mailing list, subscribe here. If you have any questions or comments, please send email to Dilin Wang or Xingyi Zhou.





[ Upcoming talks ]

Fri, November 22 | 11:00AM | GDC 6.302 | Jacob Andreas (MIT) | Language as a scaffold for learning
Fri, December 6 | 11:00AM | GDC 6.302 | Dan Roth (University of Pennsylvania) | TBD
Fri, January 24 | 11:00AM | GDC 6.302 | Danqi Chen (Princeton) | TBD
Fri, February 7 | 11:00AM | GDC 6.302 | Dieter Fox (University of Washington) | TBD

Friday, November 22, 2019, 11:00AM



GDC 6.302

Language as a scaffold for learning

Jacob Andreas   [homepage]

MIT

Abstract: Research on constructing and evaluating machine learning models is driven almost exclusively by examples. We specify the behavior of sentiment classifiers with labeled documents, guide learning of robot policies by assigning scores to rollouts, and interpret learned image representations by retrieving salient training images. Humans are able to learn from richer sources of supervision, and in the real world this supervision often takes the form of natural language: we learn word meanings from dictionaries and policies from cookbooks; we show understanding by explaining rather than demonstrating. This talk will explore three ways of leveraging language data to train and interpret machine learning models: using linguistic supervision instead of rewards to guide policy search, latent language to structure few-shot learning, and representation translation to generate textual explanations of learned models.

About the speaker:

Jacob Andreas is an assistant professor at MIT and a researcher at Microsoft Semantic Machines. His group's research is aimed at building natural language interfaces to intelligent systems and understanding the prediction problems that shape language and other representations. Jacob earned his Ph.D. from UC Berkeley, his M.Phil. from Cambridge (where he studied as a Churchill Scholar), and his B.S. from Columbia. He has been the recipient of an NSF graduate fellowship, a Facebook fellowship, and paper awards at NAACL and ICML.

Friday, December 6, 2019, 11:00AM



GDC 6.302

TBD

Dan Roth   [homepage]

University of Pennsylvania

TBD

About the speaker:

TBD

Friday, January 24, 2020, 11:00AM



GDC 6.302

TBD

Danqi Chen   [homepage]

Princeton

TBD

About the speaker:

TBD

Friday, February 7, 2020, 11:00AM



GDC 6.302

TBD

Dieter Fox   [homepage]

University of Washington

TBD

About the speaker:

TBD


[ Past talks ]

Fri, August 30 | 1:00PM | GDC 6.302 | Simone Parisi (TU Darmstadt) | Scalable and Autonomous Reinforcement Learning
Fri, October 4 | 11:00AM | GDC 6.302 | Katerina Fragkiadaki (Carnegie Mellon University) | Embodied Visual Recognition with Implicit 3D Feature Representations
Fri, October 25 | 11:00AM | GDC 6.302 | Nazneen Rajani (Salesforce Research) | Leveraging Explanations for Performance and Generalization in NLP and RL
Fri, November 1 | 11:00AM | GDC 6.302 | Ying Ding (School of Information, UT Austin) | Semantic link prediction for drug discovery
Fri, November 15 | 11:00AM | GDC 6.302 | Yoav Artzi (Cornell) | Robot Control and Collaboration in Situated Instruction Following

Friday, August 30, 2019, 1:00PM



GDC 6.302

Scalable and Autonomous Reinforcement Learning

Simone Parisi   [homepage]

TU Darmstadt

Abstract: Over the course of the last decade, reinforcement learning has developed into a promising tool for learning a large variety of tasks. A lot of effort has been directed towards scaling reinforcement learning to solve high-dimensional problems, such as robotic tasks with many degrees of freedom or video games. These advances, however, generally depend on hand-crafted state descriptions or pre-structured parameterized policies, or require large amounts of data or human interaction. This pre-structuring is arguably in stark contrast to the goal of autonomous learning. In this talk, I discuss the need for systematic methods to increase the autonomy of traditional learning systems, focusing on the problems of stability when little data is available, the presence of multiple conflicting objectives and high-dimensional input, and the need for novel exploration strategies in reinforcement learning.

About the speaker:

Simone Parisi joined the Intelligent Autonomous Systems lab on October 1, 2014 as a PhD student. His research interests include, amongst others, reinforcement learning, robotics, multi-objective optimization, and intrinsic motivation. During his PhD, Simone has been working on Scalable Autonomous Reinforcement Learning (ScARL), developing and evaluating new methods in the field of robotics that guarantee both a high degree of autonomy and the ability to solve complex tasks. Before his PhD, Simone completed his MSc in Computer Science Engineering at the Politecnico di Milano, Italy, and at the University of Queensland, Australia. His thesis, entitled “Study and analysis of policy gradient approaches for multi-objective decision problems,” was written under the supervision of Marcello Restelli and Matteo Pirotta.

Friday, October 4, 2019, 11:00AM



GDC 6.302

Embodied Visual Recognition with Implicit 3D Feature Representations

Katerina Fragkiadaki   [homepage]

Carnegie Mellon University

Abstract: Current state-of-the-art CNNs localize rare object categories in internet photos, yet they miss basic facts that a two-year-old has mastered: that objects have 3D extent, that they persist over time despite changes in the camera view, that they do not intersect in 3D, and others. We will discuss neural architectures that, given video streams, learn to disentangle scene appearance from camera and object motion, and distill the former into world-centric 3D feature maps. We will show that the proposed architectures learn object permanence, can generate RGB views from novel viewpoints in truly novel scenes, have objects emerge in 3D without human annotations, support grounding of language in 3D visual simulations, and learn intuitive physics in a persistent 3D feature space. In this way, they overcome many limitations of 2D CNNs for video perception, model learning, and language grounding.

About the speaker:

Katerina Fragkiadaki is an Assistant Professor in the Machine Learning Department at Carnegie Mellon University. She received her Ph.D. from the University of Pennsylvania in 2013 and was a postdoctoral fellow at UC Berkeley and Google Research (2013-2016). She has worked extensively on video segmentation, motion dynamics learning, and injecting geometry into deep visual learning. Her group develops algorithms for mobile computer vision and for learning physics and common sense for agents that move around and interact with the world. She received a best Ph.D. thesis award in 2013 and has served as an area chair for CVPR 2018, ICML 2019, ICLR 2019, and CVPR 2020.

Friday, October 25, 2019, 11:00AM



GDC 6.302

Leveraging Explanations for Performance and Generalization in NLP and RL

Nazneen Rajani   [homepage]

Salesforce Research

Abstract: Deep learning models perform poorly on tasks that require commonsense reasoning, which often necessitates some form of world knowledge or reasoning over information not immediately present in the input. In the first part of the talk, I will discuss how language models can be leveraged to generate natural language explanations that are not just interpretable but can also be used to improve performance on a downstream task such as CommonsenseQA, and I will show empirically that explanations are a way to incorporate commonsense reasoning in neural networks. Further, I will discuss how explanations can be transferred to other tasks without fine-tuning. In the second part of the talk, I will present Sherlock, a framework for probing generalization in RL. Although deep reinforcement learning (RL) has seen great success in training agents for complex simulated environments, RL agents often neither generalize nor are interpretable. Our approach is based on the intuition that, unlike RL agents, humans can adapt quickly to changes in their environment because they base their policy on robust features that are human-interpretable; as such, RL agents may generalize well if they make decisions based on such features. Sherlock quantifies the impact of human-interpretable features by comparing generalization performance with the distance between MDPs.

About the speaker:

Nazneen Rajani is a research scientist at Salesforce, where she leads the efforts on Explainable AI (XAI), focusing on leveraging explanations not just for interpretability but also for generalization. Before joining Salesforce, she earned her Ph.D. at UT Austin, working with Ray Mooney at the intersection of language and vision. She has published in and served as a reviewer for top conferences including ACL, EMNLP, NAACL, and IJCAI. More details about her publications can be found at http://www.nazneenrajani.com.

Friday, November 1, 2019, 11:00AM



GDC 6.302

Semantic link prediction for drug discovery

Ying Ding   [homepage]

School of Information, UT Austin

Abstract: A critical barrier in current drug discovery is the inability to utilize public datasets in an integrated fashion to fully understand the actions of drugs and chemical compounds on biological systems. There is a need to intelligently integrate the heterogeneous datasets now available pertaining to compounds, drugs, targets, genes, diseases, and drug side effects, so that effective network data mining algorithms can extract important biological relationships. In this talk, we demonstrate the semantic integration of 25 different databases and develop various mining and prediction methods to identify hidden associations that could provide valuable directions for further exploration at the experimental level.

About the speaker:

Dr. Ying Ding is the Bill & Lewis Suit Professor at the School of Information, University of Texas at Austin. Before that, she was a professor and director of graduate studies for the data science program at the School of Informatics, Computing, and Engineering at Indiana University, where she led the effort to develop the university's online data science graduate program. She also worked as a senior researcher at the Department of Computer Science, University of Innsbruck (Austria) and at the Free University of Amsterdam (the Netherlands). She has been involved in various NIH-, NSF-, and European Union-funded projects. She has published 240+ papers in journals, conferences, and workshops, and has served as a program committee member for 200+ international conferences. She is a co-editor of the book series Semantic Web Synthesis, published by Morgan & Claypool, co-editor-in-chief of Data Intelligence, published by MIT Press and the Chinese Academy of Sciences, and an editorial board member for several top journals in Information Science and the Semantic Web. She is a co-founder of Data2Discovery, a company advancing cutting-edge AI technologies in drug discovery and healthcare. Her current research interests include data-driven science of science, AI in healthcare, the Semantic Web, knowledge graphs, data science, scholarly communication, and the application of Web technologies.

Friday, November 15, 2019, 11:00AM



GDC 6.302

Robot Control and Collaboration in Situated Instruction Following

Yoav Artzi   [homepage]

Cornell

Abstract: I will present two projects studying the problem of learning to follow natural language instructions. I will present new datasets, a class of interpretable models for instruction following, learning methods that combine the benefits of supervised and reinforcement learning, and new evaluation protocols. In the first part, I will discuss the task of executing natural language instructions with a robotic agent. In contrast to existing work, we do not engineer formal representations of language meaning or the robot environment. Instead, we learn to directly map raw observations and language to low-level continuous control of a quadcopter drone. In the second part, I will propose the task of learning to follow sequences of instructions in a collaborative scenario, where both the user and the system execute actions in the environment and the user controls the system using natural language. To study this problem, we build CerealBar, a multi-player 3D game where a leader instructs a follower, and both act in the environment together to accomplish complex goals. The two projects were led by Valts Blukis, Alane Suhr, and collaborators. Additional information about both projects is available here: https://github.com/lil-lab/drif; http://lil.nlp.cornell.edu/cerealbar/

About the speaker:

Yoav Artzi is an Assistant Professor in the Department of Computer Science and Cornell Tech at Cornell University. His research focuses on learning expressive models for natural language understanding, most recently in situated interactive scenarios. He received an NSF CAREER award, paper awards in EMNLP 2015, ACL 2017, and NAACL 2018, a Google Focused Research Award, and faculty awards from Google, Facebook, and Workday. Yoav holds a B.Sc. summa cum laude from Tel Aviv University and a Ph.D. from the University of Washington.

[ FAI Archives ]

Fall 2017 - Spring 2018

Fall 2016 - Spring 2017

Fall 2015 - Spring 2016

Fall 2014 - Spring 2015

Fall 2013 - Spring 2014

Fall 2012 - Spring 2013

Fall 2011 - Spring 2012

Fall 2010 - Spring 2011

Fall 2009 - Spring 2010

Fall 2008 - Spring 2009

Fall 2007 - Spring 2008

Fall 2006 - Spring 2007

Fall 2005 - Spring 2006

Spring 2005

Fall 2004

Spring 2004

Fall 2003

Spring 2003

Fall 2002

Spring 2002

Fall 2001

Spring 2001

Fall 2000

Spring 2000