Do Human Rationales Improve Machine Explanations? (2019)
Work on “learning with rationales” shows that humans providing explanations to a machine learning system can improve the system’s predictive accuracy. However, this work has not been connected to work in “explainable AI,” which concerns machines explaining their reasoning to humans. In this work, we show that learning with rationales can also improve the quality of the machine’s explanations as evaluated by human judges. Specifically, we present experiments showing that, for CNN-based text classification, explanations generated using “supervised attention” are judged superior to explanations generated using normal unsupervised attention.
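The paper gives the actual model and training details; the following is only a minimal PyTorch-style sketch of the general idea behind “supervised attention”: alongside the usual classification loss, the model’s attention distribution is nudged toward human rationale annotations. The toy CNN, the MSE attention penalty, and the rationale_weight trade-off parameter are illustrative assumptions, not the architecture or loss used in the paper.

```python
# Hypothetical sketch of supervised attention for CNN text classification.
# The model, the MSE attention penalty, and rationale_weight are illustrative
# assumptions, not the specific setup from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentiveCNNClassifier(nn.Module):
    def __init__(self, vocab_size, embed_dim=100, num_filters=50, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.conv = nn.Conv1d(embed_dim, num_filters, kernel_size=3, padding=1)
        self.attn_score = nn.Linear(num_filters, 1)
        self.classify = nn.Linear(num_filters, num_classes)

    def forward(self, tokens):                        # tokens: (batch, seq)
        x = self.embed(tokens)                        # (batch, seq, embed)
        h = torch.relu(self.conv(x.transpose(1, 2)))  # (batch, filters, seq)
        h = h.transpose(1, 2)                         # (batch, seq, filters)
        attn = torch.softmax(self.attn_score(h).squeeze(-1), dim=1)  # (batch, seq)
        context = torch.bmm(attn.unsqueeze(1), h).squeeze(1)         # (batch, filters)
        return self.classify(context), attn

def supervised_attention_loss(logits, attn, labels, rationale_mask,
                              rationale_weight=1.0):
    """Classification loss plus a penalty tying attention to human rationales.

    rationale_mask: (batch, seq) float tensor, 1.0 on rationale tokens, else 0.0.
    """
    cls_loss = F.cross_entropy(logits, labels)
    # Normalize the rationale mask into a target attention distribution.
    target = rationale_mask / rationale_mask.sum(dim=1, keepdim=True).clamp(min=1.0)
    attn_loss = F.mse_loss(attn, target)
    return cls_loss + rationale_weight * attn_loss
```

In this sketch, setting rationale_weight to zero recovers ordinary unsupervised attention, which is the baseline the supervised-attention explanations were judged against.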
View:
PDF
Citation:
In Proceedings of the Second BlackboxNLP Workshop at ACL, pp. 56-62, Florence, Italy, August 2019.
Presentation:
Poster
Raymond J. Mooney, Faculty, mooney [at] cs.utexas.edu
Julia Strout, Masters Alumni, jstrout [at] utexas.edu