Forum for Artificial Intelligence



[ About FAI   |   Upcoming talks   |   Past talks ]



The Forum for Artificial Intelligence meets every other week (or so) to discuss scientific, philosophical, and cultural issues in artificial intelligence. Both technical research topics and broader inter-disciplinary aspects of AI are covered, and all are welcome to attend!

If you would like to be added to the FAI mailing list, subscribe here. If you have any questions or comments, please send email to Karl Pichotta or Will Xie.

The current schedule is also available as a Google Calendar.



[ Upcoming talks ]

Mon, August 29
11:00AM
GDC 6.302
Subramanian Ramamoorthy
University of Edinburgh
Representations and Models for Collaboratively Intelligent Robots
Fri, September 2
11:00AM
GDC 6.302
Kory Mathewson
University of Alberta
Developing Machine Intelligence to Improve Bionic Limb Control
Fri, September 9
11:00AM
GDC 6.302
Brenna Argall
Northwestern University
Turning Assistive Machines into Assistive Robots
Fri, October 7
11:00AM
GDC 6.302
Sam Bowman
New York University
Title TBD
Fri, October 28
11:00AM
GDC 6.302
Jia Deng
University of Michigan
Title TBD

Monday, August 29, 2016, 11:00AM



GDC 6.302

Representations and Models for Collaboratively Intelligent Robots

Subramanian Ramamoorthy   [homepage]

University of Edinburgh

We are motivated by the problem of building autonomous robots that are able to work collaboratively with other agents, such as human co-workers. One key attribute of such an autonomous system is the ability to make predictions about the actions and intentions of other agents in a dynamic environment - both to interpret the activity context as it is being played out and to adapt actions in response to that contextual information.

Drawing on examples from robotic systems we have developed in my lab, including mobile robots that can navigate effectively in crowded spaces and humanoid robots that can cooperate in assembly tasks, I will present recent results addressing the questions of how to efficiently capture the hierarchical nature of activities, and how to rapidly estimate latent factors, such as hidden goals and intent.

First, I will describe a procedure for topological trajectory classification, using the concept of persistent homology, which enables unsupervised extraction of certain kinds of relational concepts from motion data. One use of this representation is in devising a multi-scale version of Bayesian recursive estimation, a step towards reliably grounding human instructions in the realized activity.

Finally, I will describe work on a human-robot interface based on mobile 3D eye tracking as a signal for intention inference. We achieve this by learning a probabilistic generative model of fixations conditioned on the task the person is executing; intention inference is then achieved by inverting this model, with fixations depending on the locations of objects or regions of interest in the environment. Using preliminary experimental results, I will discuss how this approach is useful in grounding plan symbols to their representations in the environment.
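The model-inversion step admits a compact illustration. The following is a minimal sketch (a toy construction with a hypothetical Gaussian fixation model and made-up object positions, not the speaker's system): the posterior over candidate goals is updated with Bayes' rule as each fixation arrives.

```python
import numpy as np

# Hypothetical setup: three candidate objects the person might intend to reach.
goals = ["cup", "bowl", "spoon"]
goal_pos = np.array([[0.2, 0.1], [0.5, 0.4], [0.8, 0.2]])  # 2D positions

prior = np.full(len(goals), 1.0 / len(goals))  # uniform prior over intentions

def fixation_likelihood(fix, goal_xy, sigma=0.05):
    """Generative model: fixations cluster (Gaussian) around the intended object."""
    d2 = np.sum((fix - goal_xy) ** 2)
    return np.exp(-d2 / (2 * sigma ** 2))

def update(posterior, fix):
    """Invert the generative model with Bayes' rule after each fixation."""
    lik = np.array([fixation_likelihood(fix, g) for g in goal_pos])
    posterior = posterior * lik
    return posterior / posterior.sum()

post = prior
for fix in np.array([[0.48, 0.38], [0.52, 0.41], [0.50, 0.43]]):  # simulated gaze
    post = update(post, fix)

print(goals[int(np.argmax(post))])  # prints "bowl": gaze clusters near (0.5, 0.4)
```

A few fixations near one object rapidly concentrate the posterior on the corresponding intention, which is the essence of inferring goals by inverting a fixation model.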

About the speaker:

Dr. Subramanian Ramamoorthy is a Reader (Associate Professor) in the School of Informatics, University of Edinburgh, where he has been on the faculty since 2007. He is a Coordinator of the EPSRC Robotarium Research Facility, and Executive Committee Member for the Centre for Doctoral Training in Robotics and Autonomous Systems. He received his PhD in Electrical and Computer Engineering from The University of Texas at Austin in 2007. He is an elected Member of the Young Academy of Scotland at the Royal Society of Edinburgh.

His research focus has been on robot learning and decision-making under uncertainty, with emphasis on problems involving human-robot and multi-robot collaborative activities. These problems are solved using a combination of machine learning techniques, with emphasis on transfer, online, and reinforcement learning, as well as new representation and analysis techniques based on geometric/topological abstractions.

His work has been recognised by nominations for Best Paper Awards at major international conferences: ICRA 2008, IROS 2010, ICDL 2012, and EACL 2014. He serves in editorial and programme committee roles for conferences and journals in the areas of AI and robotics. He leads Team Edinferno, the first UK entry in the Standard Platform League at the RoboCup International Competition. This work has received media coverage, including by BBC News and The Telegraph, and has resulted in many public engagement activities, such as at the Royal Society Summer Science Exhibition, the Edinburgh International Science Festival, and the Edinburgh Festival Fringe.

Before joining the School of Informatics, he was a Staff Engineer with National Instruments Corp., where he contributed to five products in the areas of motion control, computer vision and dynamic simulation. This work resulted in seven US patents and numerous industry awards for product innovation.

Friday, September 2, 2016, 11:00AM



GDC 6.302

Developing Machine Intelligence to Improve Bionic Limb Control

Kory Mathewson   [homepage]

University of Alberta

Prosthetic limbs are artificial devices that serve as replacements for missing or lost body parts. The earliest documented prosthetics appear in the Rigveda, a Hindu text written over 3000 years ago. Advances in artificial limb hardware and interface technology have enabled some improvements in restoring function, but upper-limb amputation remains a difficult challenge for prosthetic replacement. Many users reject prosthetic limbs due to control system issues, lack of natural feedback, and functional limitations.

We propose the use of new high-performance computer algorithms, specifically real-time artificial intelligence (AI), to address these limitations. Using this AI, the limb can make predictions about the future and can share control with the user. The limb could also remember task-specific action sequences relevant in certain environments (think of playing piano).
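As a rough picture of real-time prediction on a limb (a toy sketch using an online least-mean-squares learner on a synthetic signal; the lab's actual systems are considerably more sophisticated), an error-driven predictor can improve its one-step forecasts from ongoing experience alone, with no batch training:

```python
import numpy as np

# Hypothetical sensor stream: a periodic grip-aperture signal for the limb to anticipate.
T = 2000
signal = np.sin(np.linspace(0, 200 * np.pi, T))

# Linear one-step predictor trained online with an LMS-style update:
# predict the next reading from the last two (sufficient for a sinusoid).
w = np.zeros(2)
alpha = 0.1
errors = []
for t in range(1, T - 1):
    phi = np.array([signal[t], signal[t - 1]])
    err = signal[t + 1] - w @ phi      # prediction error before the update
    w += alpha * err * phi             # online correction from each new sample
    errors.append(abs(err))

early, late = np.mean(errors[:100]), np.mean(errors[-100:])
print(late < early)  # prints True: predictions improve with ongoing interaction
```

The point of the sketch is the learning regime, not the model: every new sensor reading both tests the current prediction and immediately improves the predictor, which is what makes this style of learning suitable for personalization on a worn device.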

The integration of AI in a prosthetic limb is paradigm shifting. Conventional prosthetic limbs understand very little about the environment or the user. Our work changes this by providing rich information to the limb in a usable, safe, stable, and reliable way. Improved two-way communication between the human and the device is a major step toward prosthetic embodiment, allowing the limb to learn from ongoing interaction with the user and provide a personalized experience.

This work is a collaboration between the Bionic Limbs for Improved Natural Control lab and the Reinforcement Learning and Artificial Intelligence lab at the University of Alberta, in Edmonton, Canada.

About the speaker:

Kory Mathewson is currently an intern on the Twitter Cortex team in San Francisco, California. His passions lie at the interface between humans and other intelligent systems. He is completing a PhD at the University of Alberta under the supervision of Dr. Richard Sutton and Dr. Patrick Pilarski, advancing interactive machine learning algorithms for deployment on robotic platforms and big-data personalization systems. He also holds a Master's degree in Biomedical Engineering and a Bachelor's degree in Electrical Engineering. To find out more, visit http://korymathewson.com.

Friday, September 9, 2016, 11:00AM



GDC 6.302

Turning Assistive Machines into Assistive Robots

Brenna Argall   [homepage]

Northwestern University

It is a paradox that often the more severe a person's motor impairment, the more challenging it is for them to operate the very assistive machines that might enhance their quality of life. A primary aim of my lab is to address this confound by incorporating robotics autonomy and intelligence into assistive machines, offloading some of the control burden from the user. Robots already synthetically sense, act in, and reason about the world, and these technologies can be leveraged to help bridge the gap left by sensory, motor, or cognitive impairments in the users of assistive machines. However, here the human-robot team is a very particular one: the robot is physically supporting or attached to the human, replacing or enhancing lost or diminished function. In this case, getting the allocation of control between the human and robot right is absolutely essential, and will be critical for the adoption of physically assistive robots within larger society. This talk will overview some of the ongoing projects and studies in my lab, whose research lies at the intersection of artificial intelligence, rehabilitation robotics, and machine learning. We are working with a range of hardware platforms, including smart wheelchairs and assistive robotic arms. A distinguishing theme present within many of our projects is that the machine automation is customizable: to a user's unique and changing physical abilities, personal preferences, or even financial means.
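One simple way to picture control allocation (a generic linear-blending sketch, not necessarily the method used in the argallab; the function and parameter names here are hypothetical) is to mix the user's command with an autonomous policy's command according to an adjustable assistance level:

```python
def blend(user_cmd, robot_cmd, assistance):
    """Shared control by linear blending.

    assistance in [0, 1]: 0 = full user control, 1 = full autonomy.
    Both commands are sequences of the same length (e.g. velocity components).
    """
    assert 0.0 <= assistance <= 1.0
    return [(1 - assistance) * u + assistance * r
            for u, r in zip(user_cmd, robot_cmd)]

# A wheelchair-like 2D velocity command: (forward speed, turn rate).
user = [0.8, 0.1]    # the user drives mostly straight
robot = [0.6, 0.4]   # the autonomy suggests steering around an obstacle
print(blend(user, robot, 0.5))  # a 50/50 mix of the two commands
```

The single `assistance` parameter is the knob that would be customized per user: raising it for a user with severe motor impairment, lowering it for one who wants more direct control.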

About the speaker:

Brenna Argall is the June and Donald Brewer Junior Professor of Electrical Engineering & Computer Science at Northwestern University, and also an assistant professor in the Department of Mechanical Engineering and the Department of Physical Medicine & Rehabilitation. Her research lies at the intersection of robotics, machine learning, and human rehabilitation. She is director of the assistive & rehabilitation robotics laboratory (argallab) at the Rehabilitation Institute of Chicago (RIC), the premier rehabilitation hospital in the United States, and her lab's mission is to advance human ability through robotics autonomy. Argall is a 2016 recipient of the NSF CAREER award. She received her Ph.D. in Robotics (2009) and M.S. in Robotics (2006) from the Robotics Institute at Carnegie Mellon University, and her B.S. in Mathematics (2002) from Carnegie Mellon. Prior to joining Northwestern, she was a postdoctoral fellow (2009-2011) at the École Polytechnique Fédérale de Lausanne (EPFL), and prior to graduate school she held a Computational Biology position at the National Institutes of Health (NIH).

Friday, October 7, 2016, 11:00AM



GDC 6.302

Title TBD

Sam Bowman   [homepage]

New York University

About the speaker:


Friday, October 28, 2016, 11:00AM



GDC 6.302

Title TBD

Jia Deng   [homepage]

University of Michigan

Abstract TBD

About the speaker:

Jia Deng is an Assistant Professor of Computer Science and Engineering at the University of Michigan. His research focus is on computer vision and machine learning, in particular, achieving human-level visual understanding by integrating perception, cognition, and learning. He received his Ph.D. from Princeton University and his B.Eng. from Tsinghua University, both in computer science. He is a recipient of the Yahoo ACE Award, a Google Faculty Research Award, the ICCV Marr Prize, and the ECCV Best Paper Award.


[ Past talks ]

Fri, August 19
3:00PM
GDC 6.302
Vibhav Gogate
University of Texas at Dallas
Approximate Counting and Lifting for Scalable Inference and Learning in Markov Logic

Friday, August 19, 2016, 3:00PM



GDC 6.302

Approximate Counting and Lifting for Scalable Inference and Learning in Markov Logic

Vibhav Gogate   [homepage]

University of Texas at Dallas

Markov logic networks (MLNs) combine the relational representation power of first-order logic with the uncertainty representation power of probability. They often yield a compact representation and, as a result, are routinely used to represent background knowledge in a wide variety of application domains, such as natural language understanding, computer vision, and bioinformatics. However, scaling up inference and learning in them is notoriously difficult, which limits their wide applicability. In this talk, I will describe two complementary approaches, one based on approximate counting and the other based on approximate lifting, for scaling up inference and learning in MLNs. The two approaches help remedy the central issue that hurts scalability: each first-order formula typically yields tens of millions of ground formulas, so even algorithms that are linear in the number of ground formulas are computationally infeasible. The approximate counting approaches are linear in the number of ground atoms (random variables), which can be much smaller than the number of ground formulas (features), while the approximate lifting approaches substantially reduce the number of ground atoms, further reducing the complexity. I will present theoretical guarantees as well as experimental results that clearly demonstrate the power of our new approaches. (Joint work with Parag Singla, Deepak Venugopal, David Smith, Tuan Pham, and Somdeb Sarkhel.)
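A toy illustration of why counting over ground atoms can beat enumerating ground formulas (a deliberately simple construction, not the algorithms presented in the talk): for a formula whose groundings range over pairs of constants, the number of satisfied groundings in a given world can sometimes be recovered from per-atom statistics in a single linear pass.

```python
from itertools import product

# A world over 1000 constants with one unary predicate, Smokes(x),
# and the (hypothetical) pairwise formula Smokes(x) ∧ Smokes(y).
people = [f"p{i}" for i in range(1000)]
smokes = {p: (i % 3 == 0) for i, p in enumerate(people)}  # an arbitrary world

# Naive: enumerate every ground formula, O(n^2) pairs for n constants.
naive = sum(1 for x, y in product(people, people) if smokes[x] and smokes[y])

# Counting: one O(n) pass over the ground atoms suffices, since the number
# of satisfied pairs is just (number of smokers) squared.
s = sum(smokes.values())
counted = s * s

print(naive == counted)  # prints True: identical feature counts, far less work
```

Real MLN formulas are of course more complex than this, but the principle carries over: inference and weight learning need feature counts, and clever counting can deliver them without ever materializing the full set of groundings.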

About the speaker:

Vibhav Gogate is an Assistant Professor in the Computer Science Department at the University of Texas at Dallas. He received his Ph.D. from the University of California, Irvine in 2009 and then completed a two-year post-doc at the University of Washington. His broad research interests are in artificial intelligence, machine learning, and data mining. His ongoing focus is on probabilistic graphical models, their first-order-logic-based extensions such as Markov logic, and probabilistic programming. He is the co-winner of the 2010 UAI approximate probabilistic inference challenge and the 2012 PASCAL probabilistic inference competition.

[ FAI Archives ]

Fall 2015 - Spring 2016

Fall 2014 - Spring 2015

Fall 2013 - Spring 2014

Fall 2012 - Spring 2013

Fall 2011 - Spring 2012

Fall 2010 - Spring 2011

Fall 2009 - Spring 2010

Fall 2008 - Spring 2009

Fall 2007 - Spring 2008

Fall 2006 - Spring 2007

Fall 2005 - Spring 2006

Spring 2005

Fall 2004

Spring 2004

Fall 2003

Spring 2003

Fall 2002

Spring 2002

Fall 2001

Spring 2001

Fall 2000

Spring 2000