Parameter-Efficient Fine-Tuning (PEFT) is an alternative to fully fine-tuning a language model. Although PEFT methods are widely used in the natural language domain, there are few studies on applying PEFT to language models pre-trained on code and comment datasets (i.e., code-LMs). Previous research has also shown that code summarization, the task of automatically generating a natural-language description of a given code snippet, which is known to benefit program comprehension, improves with a multilingual fine-tuning approach. In multilingual fine-tuning, the code-LM is fine-tuned on a dataset consisting of multiple programming languages. AdapterFusion is a specific PEFT approach that aims to extract and compose the latent knowledge of multiple (language) adapters for a downstream task. However, our experiments reveal that AdapterFusion still learns mainly from the target language itself, without taking advantage of the other programming languages. We therefore modify the architecture and propose AdvFusion, a PEFT approach that forces the model to first learn from other programming languages and only then attend to the language of the target task. In this way, AdvFusion emphasizes knowledge transfer among programming languages, in line with multilingual fine-tuning. Our results on the CodeSearchNet dataset, using two code-LMs, show that Adapters, AdapterFusion, and our proposed AdvFusion achieve results on par with or better than fully fine-tuned models for code summarization and method name prediction. Notably, the number of trainable parameters is 123x smaller and training time is reduced by roughly 30%. AdvFusion exhibits a notable improvement over AdapterFusion, with a 0.9 to 1.7-point increase in BLEU-4 scores for Ruby, JavaScript, and Go.
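To make the core idea concrete, below is a minimal sketch (not the authors' implementation) of an AdvFusion-style layer: a fusion attention over frozen per-language adapters in which the target-language adapter can be masked out during an initial training phase, so the fusion weights must be learned from the other languages first. All class, parameter, and language names here are illustrative assumptions, not names from the paper or any library.

```python
# Sketch of an AdvFusion-style fusion layer (assumed design; names are illustrative).
import torch
import torch.nn as nn
import torch.nn.functional as F


class BottleneckAdapter(nn.Module):
    """Standard bottleneck adapter: down-project, nonlinearity, up-project, residual."""

    def __init__(self, hidden: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden, bottleneck)
        self.up = nn.Linear(bottleneck, hidden)

    def forward(self, x):
        return x + self.up(F.relu(self.down(x)))


class AdvFusionLayer(nn.Module):
    """Attention over per-language adapter outputs, with optional masking of the
    target-language adapter (the 'learn from other languages first' phase)."""

    def __init__(self, hidden: int, languages: list):
        super().__init__()
        self.languages = languages
        self.adapters = nn.ModuleDict({lang: BottleneckAdapter(hidden) for lang in languages})
        self.query = nn.Linear(hidden, hidden)
        self.key = nn.Linear(hidden, hidden)

    def forward(self, hidden_states, target_lang: str, mask_target: bool):
        # Each language adapter produces a candidate representation: (batch, seq, n_lang, hidden).
        outs = torch.stack([self.adapters[l](hidden_states) for l in self.languages], dim=2)

        q = self.query(hidden_states).unsqueeze(2)                  # (b, s, 1, h)
        k = self.key(outs)                                          # (b, s, n_lang, h)
        scores = (q * k).sum(-1) / hidden_states.size(-1) ** 0.5    # (b, s, n_lang)

        if mask_target:
            # Phase 1: hide the target language so fusion must rely on the other adapters.
            scores[..., self.languages.index(target_lang)] = float("-inf")

        attn = scores.softmax(dim=-1).unsqueeze(-1)                 # (b, s, n_lang, 1)
        return (attn * outs).sum(dim=2)                             # (b, s, h)


if __name__ == "__main__":
    layer = AdvFusionLayer(hidden=768, languages=["ruby", "javascript", "go", "python"])
    x = torch.randn(2, 16, 768)
    y1 = layer(x, target_lang="ruby", mask_target=True)   # phase 1: other languages only
    y2 = layer(x, target_lang="ruby", mask_target=False)  # phase 2: all languages
    print(y1.shape, y2.shape)
```

In this sketch, switching `mask_target` from True to False corresponds to the two training phases described above: first forcing attention onto the other programming languages, then allowing the model to attend to the target language as well.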