Sequence-to-sequence models have been used to transform erroneous programs into correct ones when trained on sufficiently large datasets. Recent studies have also provided strong empirical evidence that code reviews (natural language instructions suggesting changes to code) can further improve program repair. Large language models pre-trained on both Natural Language (NL) and Programming Language (PL) corpora carry inherent knowledge of both. In this study, we investigate whether this inherent knowledge of code and NL can be utilized to improve automated program repair. We applied PLBART and CodeT5, two state-of-the-art language models pre-trained on both PL and NL, to two such natural language-based program repair datasets. We found that the pre-trained models, fine-tuned on datasets containing both code reviews and the subsequent code changes, notably outperform all previous models. The pre-trained models improve the previously best-reported results by 9.91% on the Review4Repair dataset and by 24.72% on the dataset by Tufano et al. This suggests that pre-trained sequence-to-sequence models understand natural language better and can exploit it more effectively for repair. We performed an ablation study to assess the contributions of the pre-training mechanism and the model architecture, and found that pre-training contributed significantly more to the performance gain than the architecture. Practical deployment of pre-trained transformer models for automated program repair is still a long way off. However, our study demonstrates the substantial value of employing pre-trained models, paving the way for future work to build on them.
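To make the setup concrete, the following is a minimal sketch of how a code review comment and the corresponding buggy code could be paired as a single sequence-to-sequence input to a pre-trained model such as CodeT5. The Hugging Face checkpoint name, the example review text, and the separator-based input formatting are illustrative assumptions, not the exact configuration used in this study; in the actual experiments the model would first be fine-tuned on (review, buggy code) -> fixed code pairs.

    # Illustrative sketch only: checkpoint name and input formatting are assumptions.
    from transformers import RobertaTokenizer, T5ForConditionalGeneration

    # CodeT5 ships with a RoBERTa-style tokenizer and a T5 encoder-decoder.
    tokenizer = RobertaTokenizer.from_pretrained("Salesforce/codet5-base")
    model = T5ForConditionalGeneration.from_pretrained("Salesforce/codet5-base")

    review = "Add a null check before calling getName()."        # hypothetical review comment
    buggy_code = "String name = lookup(id).getName();"            # hypothetical buggy line

    # Concatenate the natural-language review and the buggy code as one source sequence.
    source = review + " </s> " + buggy_code
    inputs = tokenizer(source, return_tensors="pt", truncation=True, max_length=512)

    # Generate a candidate fix with beam search; after fine-tuning, the decoded
    # output would be the revised code suggested by the model.
    outputs = model.generate(**inputs, max_length=64, num_beams=5)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))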