The Codex model has demonstrated extraordinary competence in synthesizing code from natural language problem descriptions. However, revealing unknown failure modes and hidden biases requires subjecting such large-scale models to diverse, systematic evaluation studies. In this work, we evaluate the code synthesis capabilities of the Codex model on a set of 115 Python problem statements from HackerRank, a popular competitive programming portal. Our evaluation shows that Codex is indeed proficient in Python, solving 96% of the problems in a zero-shot setting and 100% of the problems in a few-shot setting. However, our evaluation also reveals clear signs that Codex generates memorized code. This is alarming, especially since the adoption and use of such models could directly impact how code is written and produced in the foreseeable future. With this in mind, we further discuss and highlight some of the prominent risks associated with large-scale models of source code. Finally, we propose a framework for evaluating code synthesis using mutation-based variations of problem statements.
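To make the proposed framework concrete, the following is a minimal sketch of how mutation-based variations of a problem statement might be generated. The operator names, word lists, and example statement here are illustrative assumptions, not the paper's actual mutation set.

```python
import random
import re

# Illustrative synonym table: swapping wording while preserving task semantics.
SYNONYMS = {
    "print": "output",
    "list": "sequence",
    "compute": "calculate",
}


def synonym_mutation(statement: str) -> str:
    """Replace selected words with synonyms, leaving the task unchanged."""
    def swap(match: re.Match) -> str:
        word = match.group(0)
        return SYNONYMS.get(word.lower(), word)
    return re.sub(r"[A-Za-z]+", swap, statement)


def identifier_mutation(statement: str) -> str:
    """Rename example identifiers so verbatim recall of a memorized
    solution no longer matches the prompt."""
    return re.sub(r"\barr\b", "values", statement)


# Pool of mutation operators; a fuller framework would include more.
MUTATIONS = [synonym_mutation, identifier_mutation]


def mutate(statement: str, rng: random.Random) -> str:
    """Apply one randomly chosen mutation operator to the statement."""
    return rng.choice(MUTATIONS)(statement)


if __name__ == "__main__":
    original = "Given a list arr of integers, compute and print the sum of arr."
    rng = random.Random(0)
    print(mutate(original, rng))
```

Under this sketch, a model that merely memorized the original statement should show a measurable drop in pass rate on the mutated variants, while a model that genuinely solves the task should be largely unaffected.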