Although large pre-trained models of code have delivered significant advancements in various code processing tasks, a major obstacle hinders their wide adoption in software developers' daily workflows: these large models consume hundreds of megabytes of memory and run slowly on personal devices, which complicates model deployment and greatly degrades the user experience. This motivates us to propose Compressor, a novel approach that can compress pre-trained models of code into extremely small models with negligible performance sacrifice. Our proposed method formulates the design of tiny models as simplifying the pre-trained model architecture: searching for a significantly smaller model whose architectural design is similar to that of the original pre-trained model. Compressor uses a genetic algorithm (GA)-based strategy to guide the simplification process. Prior studies found that a model with a higher computational cost tends to be more powerful. Inspired by this insight, the GA is designed to maximize a model's Giga floating-point operations (GFLOPs), an indicator of the model's computational cost, while satisfying the constraint on the target model size. Then, we use the knowledge distillation technique to train the small model: unlabelled data is fed into the large model and its outputs are used as labels to train the small model. We evaluate Compressor with two state-of-the-art pre-trained models, i.e., CodeBERT and GraphCodeBERT, on two important tasks, i.e., vulnerability prediction and clone detection. We use our method to compress the pre-trained models to 3 MB, which is 160$\times$ smaller than their original size. The results show that compressed CodeBERT and GraphCodeBERT are 4.31$\times$ and 4.15$\times$ faster than the original models at inference, respectively. More importantly, ...
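To make the GA-based simplification concrete, below is a minimal sketch of such a search: it evolves BERT-style architecture hyperparameters (layer count, hidden size, FFN size, attention heads) and scores each candidate by approximate GFLOPs, rejecting any candidate that exceeds the size budget. The search space, the parameter-count and FLOPs formulas, and all constants here are illustrative assumptions, not Compressor's exact implementation.

```python
# Hedged sketch: a GA that searches for a small BERT-like architecture
# maximizing GFLOPs under a parameter-size budget. The cost formulas are
# rough approximations chosen for illustration only.
import random

VOCAB, SEQ_LEN = 50265, 512          # CodeBERT-style vocabulary / input length
TARGET_BYTES = 3 * 1024 * 1024       # 3 MB target from the abstract
BYTES_PER_PARAM = 4                  # assumes fp32 weights

SEARCH_SPACE = {
    "layers": [1, 2, 3, 4, 6],
    "hidden": [16, 32, 64, 96, 128],
    "ffn":    [32, 64, 128, 256, 512],
    "heads":  [1, 2, 4, 8],
}

def param_count(arch):
    # Rough transformer size: embedding table + per-layer attention/FFN weights.
    h, f, l = arch["hidden"], arch["ffn"], arch["layers"]
    return VOCAB * h + l * (4 * h * h + 2 * h * f)

def gflops(arch):
    # Crude forward-pass cost: ~2 FLOPs per non-embedding weight per token.
    h, f, l = arch["hidden"], arch["ffn"], arch["layers"]
    return 2 * l * (4 * h * h + 2 * h * f) * SEQ_LEN / 1e9

def fitness(arch):
    # Maximize GFLOPs; infeasible architectures get a rejecting score.
    if param_count(arch) * BYTES_PER_PARAM > TARGET_BYTES:
        return -1.0
    if arch["hidden"] % arch["heads"] != 0:
        return -1.0                  # head dimension must divide hidden size
    return gflops(arch)

def random_arch():
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def mutate(arch, rate=0.3):
    return {k: random.choice(v) if random.random() < rate else arch[k]
            for k, v in SEARCH_SPACE.items()}

def crossover(a, b):
    return {k: random.choice((a[k], b[k])) for k in SEARCH_SPACE}

def search(pop_size=50, generations=40):
    pop = [random_arch() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 4]         # keep the fittest quarter
        children = [mutate(crossover(*random.sample(elite, 2)))
                    for _ in range(pop_size - len(elite))]
        pop = elite + children
    return max(pop, key=fitness)

best = search()
print(best, f"{param_count(best) / 1e6:.2f}M params, {gflops(best):.2f} GFLOPs")
```

The key design point the abstract describes is that fitness does not measure task accuracy at all: size feasibility is a hard constraint, and GFLOPs serves as a cheap proxy for model capacity, so the search can run without training any candidate.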
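The distillation step the abstract describes, training the small model on the large model's outputs over unlabelled data, can be sketched as follows. This is a generic soft-label distillation loop in PyTorch under the assumption that both models map a batch of token ids to classification logits; the function names, temperature, and optimizer settings are placeholders, not Compressor's API.

```python
# Hedged sketch of knowledge distillation: the frozen teacher's softened
# predictions on unlabelled code serve as training targets for the student.
import torch
import torch.nn.functional as F

def distill(teacher, student, unlabelled_loader, epochs=3, temperature=2.0, lr=1e-4):
    teacher.eval()                               # teacher stays frozen
    optimizer = torch.optim.Adam(student.parameters(), lr=lr)
    for _ in range(epochs):
        for batch in unlabelled_loader:          # batches of token-id tensors
            with torch.no_grad():
                soft_targets = F.softmax(teacher(batch) / temperature, dim=-1)
            log_probs = F.log_softmax(student(batch) / temperature, dim=-1)
            # KL divergence between softened teacher and student distributions,
            # scaled by T^2 as is standard in distillation objectives.
            loss = F.kl_div(log_probs, soft_targets,
                            reduction="batchmean") * temperature ** 2
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return student
```

Because the targets come from the teacher rather than from ground-truth labels, this step needs only unlabelled code, which matches the abstract's description of feeding unlabelled data into the large model and training the small one on its outputs.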