Explaining Competitive-Level Programming Solutions using LLMs (2023)
Jierui Li, Szymon Tworkowski, Yingying Wu, Raymond Mooney
In this paper, we approach competitive-level programming problem-solving as a composite task of reasoning and code generation. We propose a novel method to automatically annotate natural language explanations to problem–solution pairs. We show that despite poor performance in solving competitive-level programming problems, state-of-the-art LLMs exhibit a strong capacity for describing and explaining solutions. Our explanation generation methodology can generate a structured solution explanation for the problem containing descriptions and analysis. To evaluate the quality of the annotated explanations, we examine their effectiveness in two aspects: 1) satisfying the human programming expert who authored the oracle solution, and 2) aiding LLMs in solving problems more effectively. The experimental results on the CodeContests dataset demonstrate that while GPT-3.5's and GPT-4's abilities in describing the solution are comparable, GPT-4 shows a better understanding of the key idea behind the solution.
View:
PDF, arXiv
Citation:
Association for Computational Linguistics (ACL), Natural Language Reasoning and Structured Explanations Workshop (2023).
Presentation:
Poster
Jierui Li Ph.D. Student jierui [at] cs utexas edu
Raymond J. Mooney Faculty mooney [at] cs utexas edu