Large language models, e.g., Codex and AlphaCode, have shown the capability to produce working code for many programming tasks. However, the success rate of existing models remains low, especially on complex programming tasks. One reason is that language models lack awareness of program semantics (e.g., type information), resulting in incorrect programs (or even programs that do not compile). In this paper, we systematically study whether automated program repair (APR) techniques can fix the incorrect solutions produced by language models in LeetCode contests. The goal is to study whether APR techniques can enhance confidence in the code produced by language models. Our study reveals that: (1) automatically generated code shares some common programming mistakes with human-crafted solutions, indicating that existing APR tools have the potential to fix auto-generated code; (2) TBar and Recoder, two well-known Java APR tools based on templates and learning respectively, increase the number of solved tasks from 37 to 42 on 60 easy-level tasks, and from 5 to 9 on 53 medium-level programming tasks; (3) given bug location information provided by a statistical fault localization approach, the newly released Codex edit mode, which supports changing existing code, may outperform existing APR tools in fixing incorrect solutions. By analyzing the experimental results generated by these tools, we provide several suggestions on how to improve current APR tools.
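The abstract mentions supplying bug locations from a statistical fault localization approach. One common spectrum-based formula for this is Ochiai, which ranks program elements by how strongly their coverage correlates with failing tests (the abstract does not specify which formula the study uses, so this is an illustrative sketch only; all names and data here are hypothetical):

```python
import math

def ochiai(coverage, results):
    """Rank code lines by Ochiai suspiciousness.

    coverage: {test_name: set of executed line numbers}
    results:  {test_name: True if the test passed, False if it failed}
    Ochiai(line) = ef / sqrt(total_failed * (ef + ep)), where
    ef/ep = number of failing/passing tests that executed the line.
    """
    total_failed = sum(1 for passed in results.values() if not passed)
    all_lines = set().union(*coverage.values())
    scores = {}
    for line in all_lines:
        ef = sum(1 for t, cov in coverage.items() if line in cov and not results[t])
        ep = sum(1 for t, cov in coverage.items() if line in cov and results[t])
        denom = math.sqrt(total_failed * (ef + ep))
        scores[line] = ef / denom if denom else 0.0
    return scores

# Hypothetical coverage matrix: test t1 fails, t2 and t3 pass.
coverage = {"t1": {1, 2, 3}, "t2": {1, 3}, "t3": {1, 2}}
results = {"t1": False, "t2": True, "t3": True}
ranked = sorted(ochiai(coverage, results).items(), key=lambda kv: -kv[1])
```

Lines covered only by some passing tests (here lines 2 and 3) score higher than line 1, which every test executes; the top-ranked lines would be handed to a repair tool (or Codex edit mode) as candidate fix locations.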