Large Language Models (LLMs) such as OpenAI Codex are increasingly being used as AI-based coding assistants. Understanding the impact of these tools on developers' code is paramount, especially as recent work has shown that LLMs may suggest code containing cybersecurity vulnerabilities. We conduct a security-driven user study (N=58) to assess code written by student programmers when assisted by LLMs. Given the potential severity of low-level bugs, as well as their relative frequency in real-world projects, we task participants with implementing a singly-linked 'shopping list' structure in C. Our results indicate that the security impact in this setting (low-level C with pointer and array manipulations) is small: AI-assisted users produce critical security bugs at a rate no more than 10% higher than the control group, suggesting that the use of LLMs does not introduce new security risks in this setting.
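For concreteness, the following is a minimal sketch of the kind of low-level C code the task involves; the struct fields and function signature are illustrative assumptions, not the study's actual scaffold, which is not reproduced here.

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical node layout for a singly-linked 'shopping list';
   the study's real starter code may differ. */
typedef struct node {
    char *item_name;   /* heap-allocated copy of the item description */
    unsigned quantity;
    struct node *next; /* next entry, or NULL at the end of the list */
} node;

/* Append an item, returning the (possibly new) head, or NULL on
   allocation failure -- the kind of error path and pointer handling
   where low-level C security bugs tend to hide. */
node *list_add_item(node *head, const char *name, unsigned quantity) {
    node *n = malloc(sizeof *n);
    if (n == NULL)
        return NULL;
    n->item_name = malloc(strlen(name) + 1);
    if (n->item_name == NULL) {
        free(n); /* avoid leaking the partially built node */
        return NULL;
    }
    strcpy(n->item_name, name); /* bounded by the allocation above */
    n->quantity = quantity;
    n->next = NULL;
    if (head == NULL)
        return n; /* first element becomes the head */
    node *cur = head;
    while (cur->next != NULL)
        cur = cur->next;
    cur->next = n;
    return head;
}
```

Correctly sizing the string buffer, checking each allocation, and terminating the traversal are exactly the details that the study's security assessment scrutinizes in participants' submissions.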