Current approaches to program synthesis with Large Language Models (LLMs) exhibit a "near miss syndrome": they tend to generate programs that semantically resemble the correct answer (as measured by text similarity metrics or human evaluation), but achieve a low or even zero accuracy as measured by unit tests due to small imperfections, such as the wrong input or output format. This calls for an approach known as Synthesize, Execute, Debug (SED), whereby a draft of the solution is generated first, followed by a program repair phase addressing the failed tests. To effectively apply this approach to instruction-driven LLMs, one needs to determine which prompts perform best as instructions for LLMs, as well as strike a balance between repairing unsuccessful programs and replacing them with newly generated ones. We explore these trade-offs empirically, comparing replace-focused, repair-focused, and hybrid debug strategies, as well as different template-based and model-based prompt-generation techniques. We use OpenAI Codex as the LLM and Program Synthesis Benchmark 2 as a database of problem descriptions and tests for evaluation. The resulting framework outperforms both conventional usage of Codex without the repair phase and traditional genetic programming approaches.
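To make the Synthesize, Execute, Debug (SED) workflow concrete, the following is a minimal sketch of such a loop. It assumes a hypothetical `llm(prompt) -> str` callable standing in for the language model and a simple stdin/stdout test harness; none of these names come from the paper's actual implementation, and the repair/replace prompts are illustrative only.

```python
# Minimal SED (Synthesize, Execute, Debug) sketch.
# Assumptions (not from the paper): `llm` is any callable mapping a prompt
# string to program text; tests are stdin/expected-stdout pairs.
import contextlib
import io
import sys
from dataclasses import dataclass
from typing import Callable, List, Tuple


@dataclass
class TestCase:
    stdin: str
    expected_stdout: str


def run_tests(program: str, tests: List[TestCase]) -> List[TestCase]:
    """Execute `program` on each test's stdin and return the failing tests."""
    failed = []
    for t in tests:
        out = io.StringIO()
        original_stdin = sys.stdin
        sys.stdin = io.StringIO(t.stdin)
        try:
            with contextlib.redirect_stdout(out):
                exec(program, {"__name__": "__main__"})
        except Exception:
            failed.append(t)
            continue
        finally:
            sys.stdin = original_stdin
        if out.getvalue().strip() != t.expected_stdout.strip():
            failed.append(t)
    return failed


def sed(problem: str, tests: List[TestCase], llm: Callable[[str], str],
        max_iterations: int = 5, repair: bool = True) -> Tuple[str, bool]:
    """Synthesize a draft, execute it, then repair or replace until tests pass."""
    program = llm(f"Write a Python program that solves:\n{problem}")
    for _ in range(max_iterations):
        failed = run_tests(program, tests)
        if not failed:
            return program, True  # all tests pass
        if repair:
            # Repair-focused strategy: feed a failing test back to the model.
            t = failed[0]
            program = llm(
                f"The program below should solve:\n{problem}\n\n{program}\n\n"
                f"It fails on input {t.stdin!r} (expected {t.expected_stdout!r}). "
                "Return a corrected program."
            )
        else:
            # Replace-focused strategy: discard the draft and synthesize anew.
            program = llm(f"Write a Python program that solves:\n{problem}")
    return program, False
```

A hybrid strategy, as compared in the abstract, would interleave the two branches, for example replacing the draft only after repeated repair attempts fail.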