UTCS Explainable AI Group Meeting


The goal of Explainable AI (XAI) is to understand how agents, such as neural networks, make decisions. We focus on generating visual and/or textual explanations of the decision-making process. We also aim to understand how networks represent different concepts, and to explore approaches that use interpretable concepts to help train networks.

Fall 2018 Meeting Time & Place

  • Biweekly meetings on Friday, 3 p.m. - 4 p.m. in GDC 3.516

Scheduled Meetings

Date  Time  Place  Paper
11/30/18 3PM GDC 3.516 Tao Lei, Regina Barzilay, and Tommi Jaakkola
Rationalizing Neural Predictions, EMNLP 2016

* Future meetings are subject to rearrangement.

* Please send in suggestions for new papers to discuss.

Past Meetings

Date  Time  Place  Paper
11/16/18 3PM GDC 3.516 Arjun Chandrasekaran, Viraj Prabhu, Deshraj Yadav, Prithvijit Chattopadhyay, Devi Parikh
Do explanations make VQA models more predictable to a human?, EMNLP 2018
11/16/18 3PM GDC 3.516 Yujia Bao, Shiyu Chang, Mo Yu, Regina Barzilay
Deriving Machine Attention from Human Rationales, EMNLP 2018
11/02/18 3PM GDC 3.516 Pang Wei Koh and Percy Liang
Understanding Black-box Predictions via Influence Functions, ICML 2017
10/19/18 3PM GDC 3.516 Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin
“Why Should I Trust You?”: Explaining the Predictions of Any Classifier, KDD 2016
10/05/18 3PM GDC 3.516 Ye Zhang, Iain Marshall, Byron C. Wallace
Rationale-Augmented Convolutional Neural Networks for Text Classification, EMNLP 2016
09/21/18 3PM GDC 3.516 Lisa Anne Hendricks, Ronghang Hu, Trevor Darrell, Zeynep Akata
Grounding Visual Explanations, ECCV 2018

Proposed Papers

Topic  Suggested Papers
Explanation Evaluation Mustafa Bilgic and Raymond J. Mooney
Explaining Recommendations: Satisfaction vs. Promotion, IUI 2005
Hidden representation Bolei Zhou*, Yiyou Sun*, David Bau*, Antonio Torralba
Interpretable Basis Decomposition for Visual Explanation, ECCV 2018
Trust Score Heinrich Jiang, Been Kim, Maya Gupta
To Trust Or Not To Trust A Classifier, NIPS 2018
Prototypes and Criticisms Been Kim, Rajiv Khanna, Oluwasanmi Koyejo
Examples are not Enough, Learn to Criticize! Criticism for Interpretability, NIPS 2016

If you find a paper interesting and would like to recommend it for discussion, please let us know during a meeting or e-mail Jialin Wu.


Subscribe to Email List

Template is from Text2animation