Explainable AI
An AI system's ability to explain its reasoning is critical to its utility, since human users are reluctant to trust decisions made by opaque "black boxes." Explainable AI studies the development of systems that provide visual, textual, or multi-modal explanations to elucidate the reasoning behind their decisions.
Do Human Rationales Improve Machine Explanations? 2019
Julia Strout, Ye Zhang, Raymond J. Mooney, In Proceedings of the Second BlackboxNLP Workshop at ACL, pp. 56-62, Florence, Italy, August 2019.
Faithful Multimodal Explanation for Visual Question Answering 2019
Jialin Wu and Raymond J. Mooney, In Proceedings of the Second BlackboxNLP Workshop at ACL, pp. 103-112, Florence, Italy, August 2019.
Self-Critical Reasoning for Robust Visual Question Answering 2019
Jialin Wu and Raymond J. Mooney, In Proceedings of Neural Information Processing Systems (NeurIPS), December 2019.
Explainable Improved Ensembling for Natural Language and Vision 2018
Nazneen Rajani, PhD Thesis, Department of Computer Science, The University of Texas at Austin.
Ensembling Visual Explanations for VQA 2017
Nazneen Fatema Rajani and Raymond J. Mooney, In Proceedings of the NIPS 2017 Workshop on Visually-Grounded Interaction and Language (ViGIL), December 2017.
Using Explanations to Improve Ensembling of Visual Question Answering Systems 2017
Nazneen Fatema Rajani and Raymond J. Mooney, In Proceedings of the IJCAI 2017 Workshop on Explainable Artificial Intelligence (XAI), pp. 43-47, Melbourne, Australia, August 2017.
Explaining Recommendations: Satisfaction vs. Promotion 2005
Mustafa Bilgic and Raymond J. Mooney, In Proceedings of Beyond Personalization 2005: A Workshop on the Next Stage of Recommender Systems Research at the 2005 International Conference on Intelligent User Interfaces, San Diego, CA, J...
Explanation for Recommender Systems: Satisfaction vs. Promotion 2004
Mustafa Bilgic, Undergraduate Honors Thesis, Department of Computer Sciences, University of Texas at Austin. Unpublished.