Forum for Artificial Intelligence

[ About FAI   |   Upcoming talks   |   Past talks ]



The Forum for Artificial Intelligence meets every other week (or so) to discuss scientific, philosophical, and cultural issues in artificial intelligence. Both technical research topics and broader inter-disciplinary aspects of AI are covered, and all are welcome to attend!

If you would like to be added to the FAI mailing list, subscribe here. If you have any questions or comments, please send email to Bo Xiong or Josiah Hanna.

The current schedule is also available as a Google Calendar.



[ Upcoming talks ]

Fri, September 21, 11:00AM | GDC 6.302 | Yu Gu (West Virginia University) | Soft Autonomy: the Road towards Increasingly Intelligent Robots
Fri, September 21, 1:00PM | GDC 3.816 | Nihar Shah (Carnegie Mellon University) | Battling Demons in Peer Review
Fri, October 5, 11:00AM | GDC 6.302 | Fangkai Yang (Maana Inc) | Integrate Symbolic Planning with Reinforcement Learning for Interpretable, Data-Efficient and Robust Decision Making
Fri, October 19, 11:00AM | GDC 6.302 | Graham Neubig (Carnegie Mellon University) | Towards Open-domain Generation of Programs from Natural Language
Fri, October 26, 11:00AM | GDC 6.302 | Stefano Ermon (Stanford University) | TBD
Fri, November 9, 11:00AM | GDC 6.302 | Alex Ihler (TBA) | TBD
Fri, November 30, 11:00AM | GDC 6.302 | Zhou Yu (TBA) | TBD

Friday, September 21, 2018, 11:00AM



GDC 6.302

Soft Autonomy: the Road towards Increasingly Intelligent Robots

Yu Gu   [homepage]

West Virginia University

The ability of human designers to foresee uncertainties for robots, and to write software programs accordingly, is severely limited by their mental simulation capabilities. This predictive approach to robot programming grows quickly in complexity and often fails to handle the infinite possibilities presented by the physical world. As a result, robots today are task-specific, or “rigid”: they have difficulty handling tasks or conditions that were not planned for. In this talk, the speaker will present the vision of “soft autonomy” for making future robots adaptive, flexible, and resilient. He will draw lessons from over a decade of UAV flight-testing research and from the development of a sample return robot that won NASA’s Centennial Challenge, and identify several research directions for making future robots more intelligent.

About the speaker:

Dr. Yu Gu is an Associate Professor in the Department of Mechanical and Aerospace Engineering at West Virginia University (WVU). His main research interest is improving robots’ ability to function in increasingly complex environments and situations. Dr. Gu has designed over a dozen UAVs and ground robots and conducted numerous experiments. He was the leader of WVU Team Mountaineers, which won NASA’s Sample Return Robot Centennial Challenge in 2014, 2015, and 2016 (total prize: $855,000). Dr. Gu is currently working on a precision robotic pollinator, an autonomous planetary rover, and cooperative exploration of underground tunnels with ground and aerial robots.

Friday, September 21, 2018, 1:00PM



GDC 3.816

Battling Demons in Peer Review

Nihar Shah   [homepage]

Carnegie Mellon University

Peer review is the backbone of scholarly research. It is, however, faced with a number of challenges (or demons) such as subjectivity, bias/miscalibration, noise, and strategic behavior. The growing number of submissions in many areas of research, such as machine learning, has significantly increased the scale of these demons. This talk will present some principled and practical approaches to battling these demons in peer review: (1) Subjectivity: How to ensure that all papers are judged by the same yardstick? (2) Bias/miscalibration: How to use ratings in the presence of arbitrary or adversarial miscalibration? (3) Noise: How to assign reviewers to papers to simultaneously ensure fair and accurate evaluations in the presence of review noise? (4) Strategic behavior: How to insulate peer review from the strategic behavior of author-reviewers? The work uses tools from social choice theory, statistics and learning theory, information theory, game theory, and decision theory. (No prior knowledge of these topics will be assumed.)
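As a rough illustration of the kind of formulation behind question (3) — not the speaker's actual method, which also accounts for fairness and review noise — reviewer-paper assignment can be phrased as a standard assignment problem over a reviewer-paper similarity matrix. The sketch below uses hypothetical similarity scores and off-the-shelf SciPy.

    # Toy illustration (not the speaker's algorithm): assign one reviewer per
    # paper to maximize total reviewer-paper similarity. Scores are made up.
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    # similarity[i, j] = how well reviewer i matches paper j (hypothetical values)
    similarity = np.array([
        [0.9, 0.2, 0.4],
        [0.3, 0.8, 0.5],
        [0.6, 0.4, 0.7],
    ])

    # linear_sum_assignment minimizes cost, so negate similarities to maximize them
    reviewers, papers = linear_sum_assignment(-similarity)
    for r, p in zip(reviewers, papers):
        print(f"reviewer {r} -> paper {p} (similarity {similarity[r, p]:.1f})")

A fairness-aware assignment, as discussed in the talk, would additionally constrain the worst-off paper's match quality rather than only the total.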

About the speaker:

Nihar B. Shah is an Assistant Professor in the Machine Learning and Computer Science departments at CMU. He is a recipient of the 2017 David J. Sakrison memorial prize from EECS Berkeley for a "truly outstanding and innovative PhD thesis", the Microsoft Research PhD Fellowship 2014-16, the Berkeley Fellowship 2011-13, the IEEE Data Storage Best Paper and Best Student Paper Awards for the years 2011/2012, and the SVC Aiya Medal 2010. His research interests include statistics, machine learning, and game theory, with a current focus on applications to learning from people.

Friday, October 5, 2018, 11:00AM



GDC 6.302

Integrate Symbolic Planning with Reinforcement Learning for Interpretable, Data-Efficient and Robust Decision Making

Fangkai Yang   [homepage]

Maana Inc

Reinforcement learning and symbolic planning have both been used to build intelligent autonomous agents. Reinforcement learning relies on learning from interactions with the real world, which often requires an infeasibly large amount of experience, and deep reinforcement learning approaches are criticized for their lack of interpretability. Symbolic planning relies on manually crafted symbolic knowledge, which may not be robust to domain uncertainties and changes. In this talk I explore several ways to integrate symbolic planning with hierarchical reinforcement learning to cope with decision making in a dynamic environment with uncertainties. Symbolic plans are used to guide the agent's task execution and learning, and the learned experience is fed back into the symbolic knowledge to improve planning. The method is evaluated on benchmark reinforcement learning problems, leading to data-efficient policy search, robust symbolic plans in complex domains, and improved task-level interpretability.
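A minimal sketch of the loop the abstract describes (not the speaker's system): a symbolic planner proposes a sequence of subtasks, an RL learner executes each subtask and updates its policy, and the observed outcomes are fed back so the planner can revise future plans. All class and method names below are hypothetical placeholders.

    # Hypothetical plan-then-learn loop; the planner and learner are stubs.
    class SymbolicPlanner:
        def make_plan(self):
            # A real system would call a symbolic planner (e.g. ASP/PDDL); stubbed here.
            return ["go_to_door", "open_door", "reach_goal"]

        def update_costs(self, plan, reward):
            # Feed learned experience back into the symbolic knowledge (stubbed out).
            pass

    class RLLearner:
        def execute_and_update(self, subtask):
            # Run the current policy for one subtask, update it, and return a reward.
            return 1.0

    def plan_and_learn(planner, learner, iterations=10):
        for _ in range(iterations):
            plan = planner.make_plan()                        # symbolic plan: list of subtasks
            total = sum(learner.execute_and_update(s) for s in plan)
            planner.update_costs(plan, total)                 # learned experience -> planner
        return planner, learner

    plan_and_learn(SymbolicPlanner(), RLLearner())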

About the speaker:

Dr. Fangkai Yang is a senior research scientist at Maana Inc in Bellevue, WA. He obtained his Ph.D. in computer science from UT-Austin in 2014 under the supervision of Prof. Vladimir Lifschitz and in close collaboration with Prof. Peter Stone, studying the theoretical foundations of representing, reasoning, and planning with actions in logic formalisms, and their applications to task planning and learning for mobile intelligent robots. From 2014 to 2017 he was a research engineer at Schlumberger in Houston, where he worked on research and development of task planning and execution for the next generation of autonomous drilling rigs, as well as several other projects applying answer set programming to industrial products for planning, scheduling, and optimization. Since 2017 he has worked for Maana Inc, focusing on integrating symbolic planning and reinforcement learning to build an AI-powered decision-making support platform. His research has been published in major AI conferences and journals, including IJCAI, KR, ICLP, ICAPS, LPNMR, TPLP, and IJRR.

Friday, October 19, 2018, 11:00AM



GDC 6.302

Towards Open-domain Generation of Programs from Natural Language

Graham Neubig   [homepage]

Carnegie Mellon University

Code generation from natural language is the task of generating programs written in a programming language (e.g., Python) given a command in natural language (e.g., English). For example, if the input is "sort list x in reverse order", the system would be required to output "x.sort(reverse=True)" in Python. In this talk, I will discuss (1) machine learning models that perform this code generation, (2) methods for mining data from programming websites such as Stack Overflow, and (3) methods for semi-supervised learning that allow the model to learn from either English or Python alone, without corresponding parallel data.
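To make the task concrete, the toy sketch below (not the speaker's model) shows the shape of the data involved: a few hypothetical (intent, snippet) pairs of the kind one might mine from Stack Overflow, and a trivial word-overlap retrieval baseline. Real systems learn to generate code token by token rather than retrieve it.

    # Toy retrieval baseline for natural-language-to-code; pairs are hypothetical.
    mined_pairs = [
        ("sort list x in reverse order", "x.sort(reverse=True)"),
        ("open a file f for reading", "open(f, 'r')"),
        ("get the length of list x", "len(x)"),
    ]

    def retrieve_snippet(query):
        # Return the snippet whose mined intent shares the most words with the query.
        query_words = set(query.lower().split())
        overlap = lambda intent: len(query_words & set(intent.lower().split()))
        best_intent, best_snippet = max(mined_pairs, key=lambda p: overlap(p[0]))
        return best_snippet

    print(retrieve_snippet("sort the list x in reverse order"))  # -> x.sort(reverse=True)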

About the speaker:

Graham Neubig is an assistant professor at the Language Technologies Institute of Carnegie Mellon University. His work focuses on natural language processing, specifically multilingual models that work in many different languages, and natural language interfaces that allow humans to communicate with computers in their own language. Much of this work relies on machine learning to create these systems from data, and he is also active in developing methods and algorithms for machine learning over natural language data. He publishes regularly in the top venues in natural language processing, machine learning, and speech, and his work has won best paper awards at EMNLP, EACL, and WNMT. He is also active in developing open-source software, and is the main developer of the DyNet neural network toolkit.

Friday, October 26, 2018, 11:00AM



GDC 6.302

TBD

Stefano Ermon   [homepage]

Stanford University

TBD

About the speaker:

TBA

Friday, November 9, 2018, 11:00AM



GDC 6.302

TBD

Alex Ihler   [homepage]

TBA

TBD

About the speaker:

TBA

Friday, November 30, 2018, 11:00AM



GDC 6.302

TBD

Zhou Yu   [homepage]

TBA

TBD

About the speaker:

TBA


[ Past talks ]

[ FAI Archives ]

Fall 2017 - Spring 2018

Fall 2016 - Spring 2017

Fall 2015 - Spring 2016

Fall 2014 - Spring 2015

Fall 2013 - Spring 2014

Fall 2012 - Spring 2013

Fall 2011 - Spring 2012

Fall 2010 - Spring 2011

Fall 2009 - Spring 2010

Fall 2008 - Spring 2009

Fall 2007 - Spring 2008

Fall 2006 - Spring 2007

Fall 2005 - Spring 2006

Spring 2005

Fall 2004

Spring 2004

Fall 2003

Spring 2003

Fall 2002

Spring 2002

Fall 2001

Spring 2001

Fall 2000

Spring 2000