Pre-trained programming language (PL) models (such as CodeT5, CodeBERT, and GraphCodeBERT) have the potential to automate software engineering tasks involving code understanding and code generation. However, these models are not robust to changes in the input and are thus potentially susceptible to adversarial attacks. We propose CodeAttack, a simple yet effective black-box attack model that uses code structure to generate imperceptible, effective, and minimally perturbed adversarial code samples. We demonstrate the vulnerabilities of state-of-the-art PL models to code-specific adversarial attacks. We evaluate the transferability of CodeAttack on several code-code (translation and repair) and code-NL (summarization) tasks across different programming languages. CodeAttack outperforms state-of-the-art adversarial NLP attack models, achieving the best overall performance while being more efficient and imperceptible.
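The kind of perturbation the abstract describes can be illustrated with a toy sketch: a semantics-preserving identifier rename that leaves program behavior unchanged while altering the tokens a PL model sees. This is a hypothetical illustration of the attack surface, not CodeAttack's actual search procedure; the function name and the visually similar replacement `tota1` are assumptions for the example.

```python
import re

def rename_identifier(code: str, old: str, new: str) -> str:
    """Rename one identifier in a code snippet.

    Because only the name changes, program semantics are preserved,
    yet the token sequence fed to a PL model is different -- the kind
    of minimal, imperceptible edit a black-box attack can search over.
    (Toy sketch: a regex is used here; a real attack would operate on
    the parsed code structure.)
    """
    # Word boundaries (\b) avoid touching substrings of other names.
    return re.sub(rf"\b{re.escape(old)}\b", new, code)

original = "def add(total, x):\n    return total + x"
# Swap 'total' for the visually similar 'tota1' (letter l -> digit 1).
perturbed = rename_identifier(original, "total", "tota1")
print(perturbed)
```

Running the sketch shows the perturbed snippet computes the same result as the original, even though every occurrence of the identifier has changed.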