Large language models (LLMs), such as OpenAI's Codex, have demonstrated their potential to generate code from natural language descriptions across a wide range of programming tasks. Several benchmarks have recently emerged to evaluate the ability of LLMs to generate functionally correct code from natural language intent with respect to a set of hidden test cases. This has enabled the research community to identify significant and reproducible advancements in LLM capabilities. However, there is currently a lack of benchmark datasets for assessing the ability of LLMs to generate functionally correct code edits based on natural language descriptions of intended changes. This paper aims to address this gap by motivating the problem of NL2Fix: translating natural language descriptions of code changes (namely bug fixes described in issue reports in repositories) into correct code fixes. To this end, we introduce Defects4J-NL2Fix, a dataset of 283 Java programs from the popular Defects4J dataset augmented with high-level descriptions of bug fixes, and empirically evaluate the performance of several state-of-the-art LLMs on this task. Results show that these LLMs together are capable of generating plausible fixes for 64.6% of the bugs, and the best LLM-based technique achieves up to 21.20% top-1 and 35.68% top-5 accuracy on this benchmark.