Multilingual fine-tuning (of a multilingual pre-trained language model) has been shown to improve performance on downstream tasks. However, different programming languages have different structural properties, so fine-tuning on a multilingual dataset may be sub-optimal or may even degrade performance for a particular language. In this study, we propose a new modular component architecture, AdvFusion, that leverages features of multiple programming languages for a target low-resource programming language, Ruby. Our results show that AdvFusion efficiently extracts useful features from different programming languages and outperforms the existing state-of-the-art multilingual fine-tuning approach by 12% on the code summarization task.