UTCS Explainable AI Group Meeting


The goal of Explainable AI (XAI) is to understand how agents, such as neural networks, make decisions. We focus on generating visual and/or textual explanations for the decision-making process. We also aim to understand how networks represent different concepts and to explore approaches that use these interpretable concepts to help train networks.

Spring 2021 Meeting Time & Place

  • Biweekly meetings on Fridays, 9–10 a.m. Meetings are held over Zoom; please join via this link

Scheduled Meetings

Date    Time    Place    Paper
04/23/21 9AM Zoom Shweta Narkar et al.
Model LineUpper: Supporting Interactive Model Comparison at Multiple Levels for AutoML, IUI 2021

* Future meetings are subject to rescheduling.

* Please send suggestions for new papers to discuss.

Past Meetings

Date    Time    Place    Paper
04/09/21 9AM Zoom Michael Sejr Schlichtkrull et al.
Interpreting Graph Neural Networks for NLP With Differentiable Edge Masking, ICLR
03/26/21 9AM Zoom Bhavya Ghai et al.
Explainable Active Learning (XAL): Toward AI Explanations as Interfaces for Machine Teachers, CSCW
02/26/21 9AM Zoom Vipin Pillai and Hamed Pirsiavash
Explainable Models with Consistent Interpretations, AAAI
02/12/21 9AM Zoom Suraj Srinivas and Francois Fleuret
Rethinking the Role of Gradient-based Attribution Methods for Model Interpretability, ICLR
12/02/20 2PM Zoom Neema Kotonya and Francesca Toni
Explainable Automated Fact-Checking for Public Health Claims, EMNLP
11/18/20 2PM Zoom Sarthak Jain et al.
Learning to Faithfully Rationalize by Construction, ACL
11/04/20 2PM Zoom Alon Jacovi and Yoav Goldberg
Aligning Faithful Interpretations with their Social Attribution, ACL
10/21/20 2PM Zoom Pepa Atanasova et al.
Generating Fact Checking Explanations, ACL
10/7/20 2PM Zoom Sanjay Subramanian et al.
Obtaining Faithful Interpretations from Compositional Neural Networks, ACL
09/23/20 2PM Zoom Gagan Bansal et al.
Does the Whole Exceed its Parts? The Effect of AI Explanations on Complementary Team Performance, ACM
09/09/20 2PM Zoom Patrick Schramowski et al.
Making deep neural networks right for the right scientific reasons by interacting with their explanations, Nature Machine Intelligence
03/04/20 3PM GDC 3.516 Ann-Kathrin Dombrowski et al.
Explanations Can Be Manipulated and Geometry is to Blame, NeurIPS 2019
02/05/20 3PM GDC 3.516 Amirata Ghorbani et al.
Towards Automatic Concept-based Explanations, NeurIPS 2019
11/15/19 3PM GDC 3.516 Sofia Serrano and Noah A. Smith
Is Attention Interpretable?, ACL 2019
11/01/19 3PM GDC 3.516 Forough Poursabzi-Sangdeh et al.
Manipulating and Measuring Model Interpretability, NIPS 2017 Workshop on Transparent and Interpretable Machine Learning in Safety Critical Environments
10/18/19 3PM GDC 3.516 Nazneen Fatema Rajani, Bryan McCann, Caiming Xiong, Richard Socher
Explain Yourself! Leveraging Language Models for Commonsense Reasoning, ACL 2019
10/04/19 3PM GDC 3.516 Zachary C. Lipton
The Mythos of Model Interpretability, ICML 2016 Human Interpretability in Machine Learning Workshop
09/20/19 3PM GDC 4.816 Joost Bastings, Wilker Aziz, Ivan Titov
Interpretable Neural Predictions with Differentiable Binary Variables, ACL 2019
05/10/19 3PM GDC 3.516 Ramprasaath R. Selvaraju, Stefan Lee, Yilin Shen, Hongxia Jin, Dhruv Batra, Devi Parikh
Taking a HINT: Leveraging Explanations to Make Vision and Language Models More Grounded, ICCV 2019
04/26/19 3PM GDC 3.516 Andrew Ross, Michael C. Hughes, and Finale Doshi-Velez
Right for the Right Reasons: Training Differentiable Models by Constraining their Explanations, IJCAI 2017
04/12/19 3PM GDC 3.516 Sarthak Jain, Byron C. Wallace
Attention is not Explanation, NAACL 2019
03/29/19 3PM GDC 3.516 Quanshi Zhang, Ying Nian Wu, and Song-Chun Zhu
Interpretable CNNs, TPAMI
03/08/19 3PM GDC 3.516 Chaofan Chen, Oscar Li, Chaofan Tao, Alina Jade Barnett, Jonathan Su, Cynthia Rudin
This Looks Like That: Deep Learning for Interpretable Image Recognition
02/22/19 3PM GDC 3.516 Cynthia Rudin
Please Stop Explaining Black Box Models for High Stakes Decisions, NeurIPS 2018 (Workshop on Critiquing and Correcting Trends in Machine Learning)
02/08/19 3PM GDC 3.516 Bolei Zhou*, Yiyou Sun*, David Bau*, Antonio Torralba
Interpretable Basis Decomposition for Visual Explanation, ECCV 2018
11/30/18 3PM GDC 3.516 Tao Lei, Regina Barzilay and Tommi Jaakkola
Rationalizing Neural Predictions, EMNLP 2016
11/16/18 3PM GDC 3.516 Arjun Chandrasekaran, Viraj Prabhu, Deshraj Yadav, Prithvijit Chattopadhyay, Devi Parikh
Do explanations make VQA models more predictable to a human?, EMNLP 2018
11/16/18 3PM GDC 3.516 Yujia Bao, Shiyu Chang, Mo Yu, Regina Barzilay
Deriving Machine Attention from Human Rationales, EMNLP 2018
11/02/18 3PM GDC 3.516 Pang Wei Koh and Percy Liang
Understanding Black-box Predictions via Influence Functions, ICML 2017
10/19/18 3PM GDC 3.516 Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin
“Why Should I Trust You?” Explaining the Predictions of Any Classifier, KDD 2016
10/05/18 3PM GDC 3.516 Ye Zhang, Iain Marshall, Byron C. Wallace
Rationale-Augmented Convolutional Neural Networks for Text Classification, EMNLP 2016
09/21/18 3PM GDC 3.516 Lisa Anne Hendricks, Ronghang Hu, Trevor Darrell, Zeynep Akata
Grounding Visual Explanations, ECCV 2018

Proposed Papers

Topic    Suggested Papers
Explanation Evaluation Mustafa Bilgic and Raymond J. Mooney
Explaining Recommendations: Satisfaction vs. Promotion, IUI 2005
Trust Score Heinrich Jiang, Been Kim, Maya Gupta
To Trust Or Not To Trust A Classifier, NIPS 2018
Prototypes and Criticisms Been Kim, Rajiv Khanna, Oluwasanmi Koyejo
Examples are not Enough, Learn to Criticize! Criticism for Interpretability, NIPS 2016

If you find a paper interesting and would like to recommend it for discussion, please let us know during the meeting or e-mail Jialin Wu.


Subscribe to Email List

Template is from Text2animation