Forum for Artificial Intelligence

[ About FAI   |   Upcoming talks   |   Past talks ]



The Forum for Artificial Intelligence meets every other week (or so) to discuss scientific, philosophical, and cultural issues in artificial intelligence. Both technical research topics and broader inter-disciplinary aspects of AI are covered, and all are welcome to attend!

If you would like to be added to the FAI mailing list, subscribe here. If you have any questions or comments, please send email to Karl Pichotta or Bo Xiong.

The current schedule is also available as a Google Calendar.



[ Upcoming talks ]

Fri, September 30 | 11:00AM | GDC 2.216 (GDC Auditorium) | Peter Stone, University of Texas at Austin | Artificial Intelligence and Life in 2030
Fri, October 14 | 11:00AM | GDC 6.302 | Junyi Jessy Li, University of Pennsylvania | Title TBD
Fri, October 21 | 11:00AM | GDC 1.304 | Jia Deng, University of Michigan | Going Deeper in Semantics and Mid-Level Vision
Mon, October 31 | 11:00AM | GDC 6.302 | Ido Dagan, Bar Ilan University | Title TBD
Fri, November 18 | 11:00AM | GDC 6.302 | John Schulman, OpenAI | Title TBD
Fri, December 2 | 11:00AM | GDC 6.302 | Sam Bowman, New York University | Learning neural networks for sentence understanding with the Stanford NLI corpus
Wed, March 29 | 11:00AM | Location TBD | Barbara Grosz, Harvard University | Title TBD

Friday, September 30, 2016, 11:00AM



GDC 2.216 (GDC Auditorium)

Artificial Intelligence and Life in 2030

Peter Stone   [homepage]

University of Texas at Austin

The One Hundred Year Study on Artificial Intelligence, launched in the fall of 2014, is a long-term investigation of the field of Artificial Intelligence (AI) and its influences on people, their communities, and society. As its core activity, the Standing Committee that oversees the One Hundred Year Study forms a Study Panel every five years to assess the current state of AI. The first Study Panel report, published in September 2016, focuses on eight domains the panelists considered to be most salient: transportation; service robots; healthcare; education; low-resource communities; public safety and security; employment and workplace; and entertainment. In each of these domains, the report both reflects on progress in the past fifteen years and anticipates developments in the coming fifteen years. The report also includes recommendations concerning AI-related policy.

This talk, by the Study Panel Chair, will briefly describe the process of creating the report and summarize its contents. The floor will then be opened for questions and discussion.

Attendees are strongly encouraged to read at least the executive summary, overview, and callouts (in the margins) of the report before the session: https://ai100.stanford.edu/2016-report

About the speaker:

Dr. Peter Stone is the David Bruton, Jr. Centennial Professor and Associate Chair of Computer Science, as well as Chair of the Robotics Portfolio Program, at the University of Texas at Austin. In 2013 he was awarded the University of Texas System Regents' Outstanding Teaching Award and in 2014 he was inducted into the UT Austin Academy of Distinguished Teachers, earning him the title of University Distinguished Teaching Professor. Professor Stone's research interests in Artificial Intelligence include machine learning (especially reinforcement learning), multiagent systems, robotics, and e-commerce. Professor Stone received his Ph.D. in Computer Science in 1998 from Carnegie Mellon University. From 1999 to 2002 he was a Senior Technical Staff Member in the Artificial Intelligence Principles Research Department at AT&T Labs - Research. He is an Alfred P. Sloan Research Fellow, Guggenheim Fellow, AAAI Fellow, Fulbright Scholar, and 2004 ONR Young Investigator. In 2003, he won an NSF CAREER award for his proposed long-term research on learning agents in dynamic, collaborative, and adversarial multiagent environments; in 2007 he received the prestigious IJCAI Computers and Thought Award, given biannually to the top AI researcher under the age of 35; and in 2016 he was awarded the ACM/SIGAI Autonomous Agents Research Award.

Friday, October 14, 2016, 11:00AM



GDC 6.302

Title TBD

Junyi Jessy Li   [homepage]

University of Pennsylvania

About the speaker:

Notes:

Friday, October 21, 2016, 11:00AM



GDC 1.304

Going Deeper in Semantics and Mid-Level Vision

Jia Deng   [homepage]

University of Michigan

Achieving human-level visual understanding requires extracting deeper semantics from images. In particular, it entails moving beyond detecting objects to understanding the relations between them. It also demands progress in mid-level vision, which extracts deeper geometric information such as pose and 3D. In this talk I will present recent work on both fronts. I will describe efforts on recognizing human-object interactions, an important type of relations between visual entities. I will present a state-of-the-art method on human pose estimation. Finally, I will discuss recovering 3D from a single image, a fundamental mid-level vision problem.

About the speaker:

Jia Deng is an Assistant Professor of Computer Science and Engineering at the University of Michigan. His research focus is on computer vision and machine learning, in particular, achieving human-level visual understanding by integrating perception, cognition, and learning. He received his Ph.D. from Princeton University and his B.Eng. from Tsinghua University, both in computer science. He is a recipient of the Yahoo ACE Award, a Google Faculty Research Award, the ICCV Marr Prize, and the ECCV Best Paper Award.

Monday, October 31, 2016, 11:00AM



GDC 6.302

Title TBD

Ido Dagan   [homepage]

Bar Ilan University

About the speaker:

Notes:

Friday, November 18, 2016, 11:00AM



GDC 6.302

Title TBD

John Schulman   [homepage]

OpenAI

About the speaker:

Notes:

Friday, December 2, 2016, 11:00AM



GDC 6.302

Learning neural networks for sentence understanding with the Stanford NLI corpus

Sam Bowman   [homepage]

New York University

In this two-part talk, I’ll first introduce the Stanford Natural Language Inference corpus (SNLI, EMNLP ‘15), then present the Stack-Augmented Parser-Interpreter NN (SPINN, ACL ‘16), a model developed on that corpus.

SNLI is a human-annotated corpus for training and evaluating machine learning models on natural language inference, the task of judging the truth of one sentence conditioned on the truth of another. Natural language inference is a particularly effective way to evaluate machine learning models for sentence understanding, and SNLI's large size (570k sentence pairs) makes it newly possible to evaluate low-bias models like neural networks in this setting. I will discuss our novel methods for data collection, the quality of the resulting corpus, and some results from other research groups that have used the corpus.
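
As a concrete illustration of what a natural language inference example looks like, here is a minimal sketch in Python; the premise/hypothesis pairs and labels below are invented for illustration and are not drawn from SNLI itself, which distributes real examples in a similar three-way labeled format.

# Hypothetical NLI examples (not from SNLI); each pairs a premise with a
# hypothesis and a three-way label: entailment, contradiction, or neutral.
examples = [
    {"premise": "A man is playing a guitar on stage.",
     "hypothesis": "A man is performing music.",
     "label": "entailment"},
    {"premise": "A man is playing a guitar on stage.",
     "hypothesis": "A man is sleeping in a park.",
     "label": "contradiction"},
    {"premise": "A man is playing a guitar on stage.",
     "hypothesis": "The man is playing his favorite song.",
     "label": "neutral"},
]

for ex in examples:
    print(f"{ex['label']:>13}: {ex['premise']}  ->  {ex['hypothesis']}")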

SPINN is a neural network model for sentence encoding that builds on past work on tree-structured neural networks (e.g. Socher et al. ‘11, Tai et al. ‘15). It re-implements the core operations of those networks and improves upon them in three ways: it improves the quality of the resulting sentence representations by combining sequence- and tree-based approaches to semantic composition, it makes it possible to run the model without an external parser, and it enables minibatching and GPU computation for the first time, yielding speedups of up to 25× and making tree-based models competitive in speed with simple RNNs.
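
To make the shift-reduce idea behind stack-based tree composition concrete, here is a toy sketch in Python/NumPy. It is not the SPINN architecture (no tracking LSTM, no learned parameters, no batching); the sentence, transition sequence, and dimensions are invented for illustration.

import numpy as np

rng = np.random.default_rng(0)
D = 8                                   # toy embedding size
W = rng.standard_normal((D, 2 * D))     # stand-in for a learned composition matrix

def compose(left, right):
    # Combine two child vectors into a single parent vector.
    return np.tanh(W @ np.concatenate([left, right]))

# Encode "the cat sat" with the binary parse ( the ( cat sat ) ):
# SHIFT pushes the next token's vector; REDUCE composes the top two stack entries.
tokens = {w: rng.standard_normal(D) for w in ["the", "cat", "sat"]}
buffer = ["the", "cat", "sat"]
transitions = ["SHIFT", "SHIFT", "SHIFT", "REDUCE", "REDUCE"]

stack = []
for t in transitions:
    if t == "SHIFT":
        stack.append(tokens[buffer.pop(0)])
    else:  # REDUCE
        right, left = stack.pop(), stack.pop()
        stack.append(compose(left, right))

sentence_vector = stack[-1]             # a single vector encoding the sentence
print(sentence_vector.shape)            # (8,)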

About the speaker:

Sam Bowman recently started as an assistant professor at New York University, appointed in the Department of Linguistics and the Center for Data Science. He completed his PhD in the Department of Linguistics at Stanford University as a member of the Stanford Natural Language Processing Group. Sam's research focuses on building artificial neural network models for solving large-scale language understanding problems within natural language processing, and on using those models to learn about the human capacity for language understanding.

Wednesday, March 29, 2017, 11:00AM



Location TBD

Title TBD

Barbara Grosz   [homepage]

Harvard University

Abstract TBD

About the speaker:

Notes:


[ Past talks ]

Fri, August 19 | 3:00PM | GDC 6.302 | Vibhav Gogate, University of Texas at Dallas | Approximate Counting and Lifting for Scalable Inference and Learning in Markov Logic
Mon, August 29 | 11:00AM | GDC 6.302 | Subramanian Ramamoorthy, University of Edinburgh | Representations and Models for Collaboratively Intelligent Robots
Fri, September 2 | 11:00AM | GDC 6.302 | Kory Mathewson, University of Alberta | Developing Machine Intelligence to Improve Bionic Limb Control
Fri, September 9 | 11:00AM | GDC 6.302 | Brenna Argall, Northwestern University | Turning Assistive Machines into Assistive Robots

Friday, August 19, 2016, 3:00PM



GDC 6.302

Approximate Counting and Lifting for Scalable Inference and Learning in Markov Logic

Vibhav Gogate   [homepage]

University of Texas at Dallas

Markov logic networks (MLNs) combine the relational representation power of first-order logic with the uncertainty representation power of probability. They often yield a compact representation, and as a result are routinely used to represent background knowledge in a wide variety of application domains such as natural language understanding, computer vision, and bioinformatics. However, scaling up inference and learning in them is notoriously difficult, which limits their wide applicability. In this talk, I will describe two complementary approaches, one based on approximate counting and the second based on approximate lifting, for scaling up inference and learning in MLNs. The two approaches help remedy the following issue that adversely affects scalability: each first-order formula typically yields tens of millions of ground formulas, and even algorithms that are linear in the number of ground formulas are computationally infeasible. The approximate counting approaches are linear in the number of ground atoms (random variables), which can be much smaller than the number of ground formulas (features), while the approximate lifting approaches substantially reduce the number of ground atoms, further reducing the complexity. I will present theoretical guarantees as well as experimental results, clearly demonstrating the power of our new approaches. (Joint work with Parag Singla, Deepak Venugopal, David Smith, Tuan Pham, and Somdeb Sarkhel.)
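
To make the gap between ground formulas and ground atoms concrete, here is a back-of-the-envelope count in Python for a hypothetical first-order clause with three logical variables over binary predicates; the clause and domain size are invented for illustration and are not taken from the talk.

# Hypothetical clause: R(x, y) ^ S(y, z) => T(x, z), with x, y, z ranging over N constants.
N = 1000                        # domain size (number of constants)

ground_formulas = N ** 3        # one grounding per assignment to (x, y, z)
ground_atoms = 3 * N ** 2       # groundings of the three binary predicates R, S, T

print(f"ground formulas: {ground_formulas:,}")   # 1,000,000,000
print(f"ground atoms:    {ground_atoms:,}")      # 3,000,000

# An algorithm linear in the number of ground formulas touches ~10^9 items here,
# while one linear in the number of ground atoms touches only ~3 x 10^6.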

About the speaker:

Vibhav Gogate is an Assistant Professor in the Computer Science Department at the University of Texas at Dallas. He received his Ph.D. from the University of California, Irvine in 2009 and then did a two-year post-doc at the University of Washington. His broad research interests are in artificial intelligence, machine learning and data mining. His ongoing focus is on probabilistic graphical models, their first-order logic based extensions such as Markov logic, and probabilistic programming. He is the co-winner of the 2010 UAI approximate probabilistic inference challenge and the 2012 PASCAL probabilistic inference competition.

Monday, August 29, 2016, 11:00AM



GDC 6.302

Representations and Models for Collaboratively Intelligent Robots

Subramanian Ramamoorthy   [homepage]

University of Edinburgh

We are motivated by the problem of building autonomous robots that are able to work collaboratively with other agents, such as human co-workers. One key attribute of such an autonomous system is the ability to make predictions about the actions and intentions of other agents in a dynamic environment - both to interpret the activity context as it is being played out and to adapt actions in response to that contextual information.

Drawing on examples from robotic systems we have developed in my lab, including mobile robots that can navigate effectively in crowded spaces and humanoid robots that can cooperate in assembly tasks, I will present recent results addressing the questions of how to efficiently capture the hierarchical nature of activities, and how to rapidly estimate latent factors, such as hidden goals and intent.

Firstly, I will describe a procedure for topological trajectory classification, using the concept of persistent homology, which enables unsupervised extraction of certain kinds of relational concepts in motion data. One use of this representation is in devising a multi-scale version of Bayesian recursive estimation, which is a step towards reliably grounding human instructions in the realized activity.

Finally, I will describe work on a human-robot interface based on the use of mobile 3D eye tracking as a signal for intention inference. We achieve this by learning a probabilistic generative model of fixations conditioned on the task that the person is executing. Intention inference is achieved through inversion of this model, with fixations depending on the location of objects or regions of interest in the environment. Using preliminary experimental results, I will discuss how this approach is useful in the grounding of plan symbols to their representation in the environment.
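
The inversion step is, at heart, an application of Bayes' rule. In schematic form (the notation here is chosen for illustration and is not from the talk), with g denoting the observed fixation data and τ a candidate task:

    p(τ | g) ∝ p(g | τ) p(τ)

so the learned generative fixation model p(g | τ) is weighted by a prior over tasks and normalized over the candidate tasks to yield a posterior belief about the person's intention.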

About the speaker:

Dr. Subramanian Ramamoorthy is a Reader (Associate Professor) in the School of Informatics, University of Edinburgh, where he has been on the faculty since 2007. He is a Coordinator of the EPSRC Robotarium Research Facility, and Executive Committee Member for the Centre for Doctoral Training in Robotics and Autonomous Systems. He received his PhD in Electrical and Computer Engineering from The University of Texas at Austin in 2007. He is an elected Member of the Young Academy of Scotland at the Royal Society of Edinburgh.

His research focus has been on robot learning and decision-making under uncertainty, with emphasis on problems involving human-robot and multi-robot collaborative activities. These problems are solved using a combination of machine learning techniques, with emphasis on issues of transfer, online and reinforcement learning, as well as new representations and analysis techniques based on geometric/topological abstractions.

His work has been recognised by nominations for Best Paper Awards at major international conferences - ICRA 2008, IROS 2010, ICDL 2012 and EACL 2014. He serves in editorial and programme committee roles for conferences and journals in the areas of AI and Robotics. He leads Team Edinferno, the first UK entry in the Standard Platform League at the RoboCup International Competition. This work has received media coverage, including by BBC News and The Telegraph, and has resulted in many public engagement activities, such as at the Royal Society Summer Science Exhibition, Edinburgh International Science festival and Edinburgh Festival Fringe.

Before joining the School of Informatics, he was a Staff Engineer with National Instruments Corp., where he contributed to five products in the areas of motion control, computer vision and dynamic simulation. This work resulted in seven US patents and numerous industry awards for product innovation.

Friday, September 2, 2016, 11:00AM



GDC 6.302

Developing Machine Intelligence to Improve Bionic Limb Control

Kory Mathewson   [homepage]

University of Alberta

Prosthetic limbs are artificial devices that serve as replacements for missing or lost body parts. The first documented history of prosthetics is in the Rigveda, a Hindu text written over 3000 years ago. Advances in artificial limb hardware and interface technology have facilitated some improvements in functionality restoration, but upper limb amputation remains a difficult challenge for prosthetic replacement. Many users reject prosthetic limbs due to control system issues, a lack of natural feedback, and functional limitations.

We propose the use of new high-performance computer algorithms, specifically real-time artificial intelligence (AI), to address these limitations. With this AI, the limb can make predictions about the future and can share control with the user. The limb could remember task-specific action sequences relevant in certain environments (think of playing the piano).

The integration of AI into a prosthetic limb is paradigm-shifting. Existing prosthetic limbs understand very little about the environment or the user. Our work changes this, providing rich information to the limb in a usable, safe, stable, and reliable way. Improved two-way communication between the human and the device is a major step toward prosthetic embodiment. This allows for learning from ongoing interaction with the user, providing a personalized experience.

This work is a collaboration between the Bionic Limbs for Improved Natural Control lab and the Reinforcement Learning and Artificial Intelligence lab at the University of Alberta, in Edmonton, Canada.

About the speaker:

Kory Mathewson is currently an intern on the Twitter Cortex team in San Francisco, California. His passions lie at the interface between humans and other intelligent systems. He is completing a PhD at the University of Alberta under the supervision of Dr. Richard Sutton and Dr. Patrick Pilarski. In this work, he is advancing interactive machine learning algorithms for deployment on robotic platforms and big-data personalization systems. He also holds a Master's degree in Biomedical Engineering and a Bachelor's degree in Electrical Engineering. To find out more, visit http://korymathewson.com.

Friday, September 9, 2016, 11:00AM



GDC 6.302

Turning Assistive Machines into Assistive Robots

Brenna Argall   [homepage]

Northwestern University

It is a paradox that often the more severe a person's motor impairment, the more challenging it is for them to operate the very assistive machines which might enhance their quality of life. A primary aim of my lab is to address this confound by incorporating robotics autonomy and intelligence into assistive machines, offloading some of the control burden from the user. Robots already synthetically sense, act in and reason about the world, and these technologies can be leveraged to help bridge the gap left by sensory, motor or cognitive impairments in the users of assistive machines. However, here the human-robot team is a very particular one: the robot is physically supporting or attached to the human, replacing or enhancing lost or diminished function. In this case, getting the allocation of control between the human and robot right is absolutely essential, and will be critical for the adoption of physically assistive robots within larger society. This talk will overview some of the ongoing projects and studies in my lab, whose research lies at the intersection of artificial intelligence, rehabilitation robotics and machine learning. We are working with a range of hardware platforms, including smart wheelchairs and assistive robotic arms. A distinguishing theme present within many of our projects is that the machine automation is customizable: to a user's unique and changing physical abilities, personal preferences or even financial means.

About the speaker:

Brenna Argall is the June and Donald Brewer Junior Professor of Electrical Engineering & Computer Science at Northwestern University, and also an assistant professor in the Department of Mechanical Engineering and the Department of Physical Medicine & Rehabilitation. Her research lies at the intersection of robotics, machine learning and human rehabilitation. She is director of the assistive & rehabilitation robotics laboratory (argallab) at the Rehabilitation Institute of Chicago (RIC), the premier rehabilitation hospital in the United States, and her lab's mission is to advance human ability through robotics autonomy. Argall is a 2016 recipient of the NSF CAREER award. She received her Ph.D. in Robotics (2009) from the Robotics Institute at Carnegie Mellon University, as well as her M.S. in Robotics (2006) and B.S. in Mathematics (2002). Prior to joining Northwestern, she was a postdoctoral fellow (2009-2011) at the École Polytechnique Fédérale de Lausanne (EPFL), and prior to graduate school she held a Computational Biology position at the National Institutes of Health (NIH).

[ FAI Archives ]

Fall 2015 - Spring 2016

Fall 2014 - Spring 2015

Fall 2013 - Spring 2014

Fall 2012 - Spring 2013

Fall 2011 - Spring 2012

Fall 2010 - Spring 2011

Fall 2009 - Spring 2010

Fall 2008 - Spring 2009

Fall 2007 - Spring 2008

Fall 2006 - Spring 2007

Fall 2005 - Spring 2006

Spring 2005

Fall 2004

Spring 2004

Fall 2003

Spring 2003

Fall 2002

Spring 2002

Fall 2001

Spring 2001

Fall 2000

Spring 2000