Pretrained Transformers achieve state-of-the-art performance on various code-processing tasks but may be too large to deploy. Since software development tools often incorporate modules for different purposes that could share a single instance of the pretrained model, parameter-efficient fine-tuning of pretrained models of code is particularly relevant. In this work, we evaluate two widely used approaches, adapters and LoRA, which were initially tested on NLP tasks, on four code-processing tasks. We find that although the efficient fine-tuning approaches can achieve performance comparable to or higher than standard, full fine-tuning on code understanding tasks, they underperform full fine-tuning on code-generative tasks. These results underline the importance of testing efficient fine-tuning approaches in domains other than NLP and motivate future research on efficient fine-tuning for source code.
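For illustration, the following is a minimal PyTorch sketch of the LoRA idea referenced above: the pretrained weights of a linear projection are frozen and only a low-rank update is trained. Layer sizes, rank, and scaling are illustrative assumptions, not the exact configuration used in this work.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update: W x + (alpha / r) * B A x."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pretrained weights stay frozen
        # Low-rank factors: A is small random, B is zero, so training starts from the pretrained model.
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus scaled low-rank trainable path.
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling


# Hypothetical usage: adapt one attention projection of a Transformer of code.
layer = LoRALinear(nn.Linear(768, 768))
x = torch.randn(2, 16, 768)
out = layer(x)  # shape (2, 16, 768); only lora_A and lora_B receive gradients
```

Because only the two low-rank matrices are updated, a single frozen copy of the pretrained model can be shared across tools, with a small set of task-specific parameters per task.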