Pre-trained programming language (PL) models (such as CodeT5, CodeBERT, and GraphCodeBERT) have the potential to automate software engineering tasks involving code understanding and code generation. However, these models operate in the natural channel of code, i.e., they are primarily concerned with the human understanding of the code. They are not robust to changes in the input and are thus potentially susceptible to adversarial attacks in the natural channel. We propose CodeAttack, a simple yet effective black-box attack model that uses code structure to generate effective, efficient, and imperceptible adversarial code samples, and demonstrate the vulnerabilities of state-of-the-art PL models to code-specific adversarial attacks. We evaluate the transferability of CodeAttack on several code-code (translation and repair) and code-NL (summarization) tasks across different programming languages. CodeAttack outperforms state-of-the-art adversarial NLP attack models, achieving the best overall drop in performance while being more efficient, imperceptible, consistent, and fluent. The code can be found at https://github.com/reddy-lab-code-research/CodeAttack.
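To make the black-box setting concrete, the sketch below illustrates the general substitution-based attack paradigm the abstract refers to: rank input tokens by how much the victim model's score drops when they are perturbed, then greedily replace the most influential ones under a small perturbation budget. This is a generic illustration, not the CodeAttack algorithm itself; `query_model` and `candidate_substitutes` are hypothetical stand-ins for the victim-model query and the structure-aware candidate generator.

```python
# Minimal sketch of a black-box, substitution-based adversarial attack on a code model.
# Assumptions: `query_model` returns a task score (higher = better) for a token sequence,
# and `candidate_substitutes` proposes structure-preserving replacements for a token.

from typing import Callable, Dict, List


def greedy_blackbox_attack(
    tokens: List[str],
    query_model: Callable[[List[str]], float],
    candidate_substitutes: Callable[[str], List[str]],
    max_perturb_ratio: float = 0.15,
) -> List[str]:
    """Greedily replace the most influential tokens to degrade the model's score."""
    base_score = query_model(tokens)

    # 1. Rank tokens by influence: the score drop when the token is masked out.
    influence: Dict[int, float] = {}
    for i in range(len(tokens)):
        masked = tokens[:i] + ["<unk>"] + tokens[i + 1:]
        influence[i] = base_score - query_model(masked)
    ranked = sorted(influence, key=lambda i: influence[i], reverse=True)

    # 2. Substitute high-influence tokens within a small budget so the
    #    perturbation stays (roughly) imperceptible.
    adv = list(tokens)
    budget = max(1, int(max_perturb_ratio * len(tokens)))
    for i in ranked[:budget]:
        best_tok, best_score = adv[i], query_model(adv)
        for cand in candidate_substitutes(adv[i]):
            trial = adv[:i] + [cand] + adv[i + 1:]
            score = query_model(trial)
            if score < best_score:  # a larger score drop means a stronger attack
                best_tok, best_score = cand, score
        adv[i] = best_tok
    return adv
```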