Programming languages can benefit from one another when a shared pre-trained model is used for software engineering tasks such as code summarization and method name prediction. While full fine-tuning of Code Language Models (Code-LMs) has been explored for multilingual knowledge transfer, research on Parameter-Efficient Fine-Tuning (PEFT) for this purpose remains limited. AdapterFusion, a PEFT architecture, aims to enhance task performance by leveraging information from multiple languages, but it primarily attends to the target language. To address this, we propose AdvFusion, a novel PEFT-based approach that effectively learns from other languages before adapting to the target task. Evaluated on code summarization and method name prediction, AdvFusion outperforms AdapterFusion by up to 1.7 points and surpasses LoRA with gains of 1.99, 1.26, and 2.16 points on Ruby, JavaScript, and Go, respectively. We open-source our scripts for replication.
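To make the fusion idea concrete, the sketch below shows a minimal AdapterFusion-style layer: one bottleneck adapter per language and an attention mechanism that decides, per token, how much to borrow from each adapter's output. This is an illustrative sketch in plain PyTorch, not the paper's AdvFusion implementation; the class names, `hidden_dim=768`, the bottleneck size, and the three-language setup are assumptions chosen for the example.

```python
import torch
import torch.nn as nn


class BottleneckAdapter(nn.Module):
    """Standard bottleneck adapter: down-project, nonlinearity, up-project, residual."""

    def __init__(self, hidden_dim: int, bottleneck_dim: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)
        self.up = nn.Linear(bottleneck_dim, hidden_dim)
        self.act = nn.GELU()

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        return h + self.up(self.act(self.down(h)))


class AdapterFusionLayer(nn.Module):
    """Attention-style fusion over the outputs of several per-language adapters.

    The layer's hidden state acts as the query; each adapter's output provides a
    key/value pair, so the fusion weights control how much each language contributes
    to the current token representation.
    """

    def __init__(self, hidden_dim: int, languages, bottleneck_dim: int = 64):
        super().__init__()
        self.adapters = nn.ModuleDict(
            {lang: BottleneckAdapter(hidden_dim, bottleneck_dim) for lang in languages}
        )
        self.query = nn.Linear(hidden_dim, hidden_dim)
        self.key = nn.Linear(hidden_dim, hidden_dim)
        self.value = nn.Linear(hidden_dim, hidden_dim)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (batch, seq_len, hidden_dim)
        outs = torch.stack([adapter(h) for adapter in self.adapters.values()], dim=2)
        # outs: (batch, seq_len, n_adapters, hidden_dim)
        q = self.query(h).unsqueeze(2)                  # (B, T, 1, H)
        k = self.key(outs)                              # (B, T, A, H)
        v = self.value(outs)                            # (B, T, A, H)
        scores = (q * k).sum(-1) / (h.size(-1) ** 0.5)  # (B, T, A)
        weights = scores.softmax(dim=-1).unsqueeze(-1)  # (B, T, A, 1)
        return h + (weights * v).sum(dim=2)             # (B, T, H)


# Usage: fuse hypothetical per-language adapters (e.g., Ruby, JavaScript, Go) in one layer.
fusion = AdapterFusionLayer(hidden_dim=768, languages=["ruby", "javascript", "go"])
hidden_states = torch.randn(2, 16, 768)
print(fusion(hidden_states).shape)  # torch.Size([2, 16, 768])
```

In the standard AdapterFusion setup, the per-language adapters would be pre-trained and frozen while only the fusion parameters are trained on the target task; AdvFusion, as described in the abstract, modifies this recipe so that knowledge from the non-target languages is learned before adaptation to the target language.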