With the great success of pre-trained models, the pretrain-then-finetune paradigm has been widely adopted for downstream tasks in source code understanding. However, compared with the costly training of a large-scale model from scratch, how to effectively adapt pre-trained models to a new task has not been fully explored. In this paper, we propose an approach to bridge pre-trained models and code-related tasks. We exploit semantic-preserving transformations to enrich the diversity of downstream data and help pre-trained models learn semantic features that are invariant to these semantically equivalent transformations. Furthermore, we introduce curriculum learning to organize the transformed data in an easy-to-hard manner for fine-tuning existing pre-trained models. We apply our approach to a range of pre-trained models, and they significantly outperform state-of-the-art models on source code understanding tasks such as algorithm classification, code clone detection, and code search. Our experiments further show that, without heavy pre-training on code data, the natural-language pre-trained model RoBERTa fine-tuned with our lightweight approach can outperform or rival existing code pre-trained models, such as CodeBERT and GraphCodeBERT, fine-tuned on the above tasks. This finding suggests that there is still considerable room for improvement in code pre-trained models.
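To make the two ingredients concrete, below is a minimal, hypothetical Python sketch, not the paper's implementation: a semantic-preserving transformation that consistently renames identifiers to produce an equivalent code variant, and a curriculum that orders snippets by an assumed difficulty proxy (token count). The names RenameVariables, transform, and difficulty are illustrative only.

# Hypothetical sketch, not the paper's code: a toy semantic-preserving transformation
# (consistent identifier renaming via Python's ast module) and a simple curriculum
# ordering using an assumed difficulty proxy (token count). Requires Python 3.9+
# for ast.unparse.
import ast
import io
import tokenize


class RenameVariables(ast.NodeTransformer):
    """Consistently rename parameters and local variables; program behavior is unchanged."""

    def __init__(self):
        super().__init__()
        self.mapping = {}

    def _fresh(self, name):
        return self.mapping.setdefault(name, f"v{len(self.mapping)}")

    def visit_arg(self, node):
        node.arg = self._fresh(node.arg)
        return node

    def visit_Name(self, node):
        if isinstance(node.ctx, ast.Store):
            node.id = self._fresh(node.id)
        elif node.id in self.mapping:  # only rename names we introduced; leave globals/builtins alone
            node.id = self.mapping[node.id]
        return node


def transform(source: str) -> str:
    """Return a semantically equivalent variant of `source` with renamed identifiers."""
    tree = RenameVariables().visit(ast.parse(source))
    ast.fix_missing_locations(tree)
    return ast.unparse(tree)


def difficulty(source: str) -> int:
    """Assumed difficulty proxy: number of lexical tokens in the snippet."""
    return sum(1 for _ in tokenize.generate_tokens(io.StringIO(source).readline))


if __name__ == "__main__":
    snippets = [
        "def add(a, b):\n    return a + b\n",
        "def mean(xs):\n    total = 0\n    for x in xs:\n        total += x\n    return total / len(xs)\n",
    ]
    augmented = [transform(s) for s in snippets]               # enrich downstream data diversity
    curriculum = sorted(snippets + augmented, key=difficulty)  # easy-to-hard ordering for fine-tuning
    for s in curriculum:
        print(s, end="\n---\n")

In practice, the difficulty measure and the set of transformations would be chosen per task; the token-count proxy and identifier renaming above are stand-ins used only to illustrate the easy-to-hard ordering and the notion of semantically equivalent variants.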