AI-powered coding assistants such as GitHub Copilot and OpenAI's ChatGPT have achieved notable success in automating code generation. However, these tools rely on pre-trained Large Language Models (LLMs) that are typically trained on human-written code sourced from open-source hosting sites such as GitHub, and this code often contains inherent security vulnerabilities. These vulnerabilities may then be mirrored in the code generated by the LLMs, a critical risk revealed and highlighted by recent empirical studies. In this work, we present an exploratory study on whether fine-tuning pre-trained LLMs on datasets of vulnerability-fixing commits can promote secure code generation. We explored full fine-tuning and two parameter-efficient fine-tuning (PEFT) techniques (LoRA and IA3) on four pre-trained code-generation LLMs. We crawled a fine-tuning dataset (14,622 C/C++ files) for secure code generation by collecting code fixes of confirmed vulnerabilities from open-source repositories. Our evaluation dataset comprises 52 vulnerability scenarios designed to cover the most dangerous C/C++ CWEs. Our exploration reveals that fine-tuning LLMs with PEFT techniques can enhance secure code generation, with maximum security improvements of 6.4% for C and 5.0% for C++. In addition, we compared the fine-tuning approaches with prompt-based approaches and found that LoRA-tuned models outperform the prompt-based approaches in secure code generation. Finally, fine-tuning with function-level and block-level datasets achieves the best secure code generation performance, compared to the file-level and line-level alternatives.
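As a rough illustration of the parameter-efficient setup described above, the sketch below attaches LoRA adapters to a pre-trained code LLM with the HuggingFace `peft` library and fine-tunes it on a corpus of vulnerability-fix snippets. The base model name, the file `fixes.jsonl`, and all hyperparameters are illustrative placeholders, not the paper's actual configuration.

```python
# Minimal sketch of LoRA fine-tuning on a code LLM, assuming a JSONL corpus of
# vulnerability-fixing snippets with a "text" field. Names and hyperparameters
# are placeholders, not the configuration used in the study.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base_model = "Salesforce/codegen-350M-mono"   # placeholder pre-trained code LLM
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model)

# Wrap the base model with LoRA adapters; only the adapter weights are trained.
lora_cfg = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05, task_type="CAUSAL_LM")
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()

# Tokenize function-/block-level fix snippets (hypothetical fixes.jsonl).
dataset = load_dataset("json", data_files="fixes.jsonl", split="train")
dataset = dataset.map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
    remove_columns=dataset.column_names,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="lora-secure-codegen",
                           per_device_train_batch_size=4,
                           num_train_epochs=3,
                           learning_rate=3e-4),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("lora-secure-codegen")  # saves only the LoRA adapter weights
```

An IA3 variant would follow the same pattern with `IA3Config` in place of `LoraConfig`; the appeal of either PEFT method is that only a small adapter is trained and stored while the base model weights stay frozen.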