Pre-trained neural Language Models (PTLMs), such as CodeBERT, have recently been used in software engineering as models pre-trained on large source code corpora. Their knowledge is transferred to downstream tasks (e.g., code clone detection) via fine-tuning. In natural language processing (NLP), other alternatives for transferring the knowledge of PTLMs have been explored through the use of adapters: compact, parameter-efficient modules inserted into the layers of a PTLM. Adapters are known to facilitate adapting to many downstream tasks, compared to fine-tuning, which requires retraining all of the model's parameters; this advantage owes to the adapters' plug-and-play nature and parameter efficiency. However, their usage in software engineering has not been explored. Here, we explore knowledge transfer using adapters, based on the Naturalness Hypothesis proposed by Hindle et al. \cite{hindle2016naturalness}. Specifically, we study the bimodality of adapters on two tasks, cloze test and code clone detection, compared against their benchmarks from the CodeXGLUE platform. These adapters are trained on programming languages and inserted into a PTLM that was pre-trained on English corpora (N-PTLM). Three programming languages, C/C++, Python, and Java, are studied, along with extensive experiments on the best setup for the adapters. Improving the results of the N-PTLM confirms the success of the adapters in transferring knowledge to software engineering tasks; the adapted models are sometimes on par with, or exceed, the results of a PTLM trained on source code, while being more efficient in terms of the number of parameters, memory usage, and inference time. Our results can open new directions for building smaller models for more software engineering tasks. We open-source all the scripts and the trained adapters.