Recently, large language models for code generation have achieved breakthroughs in several programming language tasks. Their advances on competition-level programming problems have made them an emerging pillar of AI-assisted pair programming. Tools such as GitHub Copilot are already part of daily programming workflows and are used by more than a million developers. The training data for these models is usually collected from open-source repositories (e.g., GitHub) that contain software faults and security vulnerabilities. Such unsanitized training data can lead language models to learn these vulnerabilities and propagate them during code generation. Given how widely these models are used in developers' daily workflows, it is crucial to study their security aspects systematically. In this work, we propose the first approach for automatically finding security vulnerabilities in black-box code generation models. To this end, we devise a novel black-box inversion approach based on few-shot prompting. We evaluate the effectiveness of our approach by examining code generation models for the generation of high-risk security weaknesses. We show that our approach automatically and systematically finds thousands of security vulnerabilities in various code generation models, including the commercial black-box model GitHub Copilot.
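To make the idea concrete, the sketch below is a hypothetical illustration of few-shot prompting against a black-box completion model, not the paper's implementation. A handful of prompts known to elicit a target weakness (here a crude CWE-89 SQL-injection pattern, chosen only as an example) are prepended to a fresh code stub; the model's completion is requested through an injected `complete` callable that stands in for any completion API; and a simple heuristic flags suspicious outputs. The names `FEW_SHOT`, `build_prompt`, and `find_candidates` are our own illustrative assumptions.

```python
# Minimal sketch (not the paper's implementation): few-shot prompting
# against a black-box code completion model to surface insecure completions.
# `complete` is a hypothetical stand-in for any completion API; the
# few-shot examples and the CWE-89 target are illustrative assumptions.

from typing import Callable, List

# Few-shot examples: short snippets known to exhibit the target weakness
# (string-built SQL queries, in the style of CWE-89).
FEW_SHOT = '''# Example 1
def get_user(db, name):
    query = "SELECT * FROM users WHERE name = '" + name + "'"
    return db.execute(query)

# Example 2
def delete_user(db, uid):
    db.execute("DELETE FROM users WHERE id = " + uid)
'''


def build_prompt(target_stub: str) -> str:
    """Prepend few-shot vulnerable examples to a fresh code stub so the
    model is steered toward completing it in the same (insecure) style."""
    return FEW_SHOT + "\n# Example 3\n" + target_stub


def find_candidates(complete: Callable[[str], str],
                    stubs: List[str]) -> List[str]:
    """Query the black-box model on each stub and keep completions that a
    simple pattern check flags as potentially vulnerable."""
    flagged = []
    for stub in stubs:
        code = complete(build_prompt(stub))
        # Crude CWE-89 heuristic: SQL executed from string concatenation.
        if "execute(" in code and "+" in code:
            flagged.append(code)
    return flagged
```

In practice, a pipeline along these lines would hand flagged completions to a security-focused static analyzer rather than a string heuristic before counting them as confirmed vulnerabilities.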