Recent progress in large language code models (LLCMs) has led to a dramatic surge in their adoption for software development. Nevertheless, it is widely known that training a well-performing LLCM requires substantial human effort for data collection and high-quality annotation. Additionally, the training dataset may be proprietary (or only partially open to the public), and training is often conducted on large-scale GPU clusters at high cost. Inspired by the recent success of imitation attacks in stealing computer vision and natural language models, this work launches the first imitation attack on LLCMs: by querying a target LLCM with carefully designed queries and collecting its outputs, an adversary can train an imitation model whose behavior closely matches that of the target LLCM. We systematically investigate the effectiveness of imitation attacks under different query schemes and different LLCM tasks. We also design novel methods to polish the LLCM outputs, resulting in an effective imitation training process. We summarize our findings and the lessons learned in this study, which can help better characterize the attack surface of LLCMs. Our research contributes to the growing body of knowledge on imitation attacks and defenses for deep neural models, particularly in the domain of code-related tasks.
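To make the attack pipeline described above concrete, the following is a minimal sketch of its three stages: query the target LLCM with crafted prompts, collect its outputs, and fine-tune a small open model on the harvested (query, output) pairs. The endpoint URL, the response schema, and the choice of "Salesforce/codegen-350M-mono" as the imitation backbone are illustrative assumptions, not details taken from the paper.

```python
# Sketch of an imitation-attack pipeline (assumptions noted inline).
import requests
import torch
from torch.utils.data import Dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments)

TARGET_URL = "https://example.com/target-llcm/generate"  # hypothetical endpoint


def query_target_llcm(prompt: str) -> str:
    """Send one crafted query to the target LLCM and return its raw output."""
    resp = requests.post(TARGET_URL, json={"prompt": prompt}, timeout=30)
    resp.raise_for_status()
    return resp.json()["completion"]  # assumed response schema


class ImitationDataset(Dataset):
    """(query, target-output) pairs tokenized for causal-LM fine-tuning."""

    def __init__(self, pairs, tokenizer, max_len=512):
        self.enc = [
            tokenizer(q + tokenizer.eos_token + o, truncation=True,
                      padding="max_length", max_length=max_len)
            for q, o in pairs
        ]
        self.pad_id = tokenizer.pad_token_id

    def __len__(self):
        return len(self.enc)

    def __getitem__(self, i):
        ids = torch.tensor(self.enc[i]["input_ids"])
        labels = ids.clone()
        labels[ids == self.pad_id] = -100  # ignore padding in the loss
        return {"input_ids": ids,
                "attention_mask": torch.tensor(self.enc[i]["attention_mask"]),
                "labels": labels}


if __name__ == "__main__":
    # 1) Query the target with crafted prompts and harvest its outputs.
    prompts = ["# Write a function that reverses a linked list",
               "# Parse a CSV file and return a list of dicts"]
    pairs = [(p, query_target_llcm(p)) for p in prompts]

    # 2) Fine-tune a small open model on the harvested pairs.
    name = "Salesforce/codegen-350M-mono"  # assumed imitation backbone
    tokenizer = AutoTokenizer.from_pretrained(name)
    tokenizer.pad_token = tokenizer.eos_token  # CodeGen has no pad token
    model = AutoModelForCausalLM.from_pretrained(name)
    trainer = Trainer(
        model=model,
        args=TrainingArguments(output_dir="imitation-model",
                               num_train_epochs=3,
                               per_device_train_batch_size=4),
        train_dataset=ImitationDataset(pairs, tokenizer),
    )
    trainer.train()
```

In practice, the harvested outputs would first be filtered and polished (as the abstract notes) before fine-tuning; the sketch omits that step for brevity.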