Faithful Multimodal Explanation for Visual Question Answering (2019)
AI systems’ ability to explain their reasoning is critical to their utility and trustworthiness. Deep neural networks have enabled significant progress on many challenging problems such as visual question answering (VQA). However, most of them are opaque black boxes with limited explanatory capability. This paper presents a novel approach to developing a high-performing VQA system that can elucidate its answers with integrated textual and visual explanations that faithfully reflect important aspects of its underlying reasoning process while capturing the style of comprehensible human explanations. Extensive experimental evaluation demonstrates the advantages of this approach compared to competing methods using both automated metrics and human evaluation.
Citation:
In Proceedings of the Second BlackboxNLP Workshop at ACL, pp. 103-112, Florence, Italy, August 2019.
Authors:
Jialin Wu, Ph.D. Alumni (jialinwu [at] utexas edu)
Raymond J. Mooney, Faculty (mooney [at] cs utexas edu)