Applying machine learning to tasks that operate on code changes requires numerical representations of those changes. In this work, we propose an approach for obtaining such representations during pre-training and evaluate it on two downstream tasks: applying changes to code and commit message generation. During pre-training, the model learns to apply a given code change correctly. This task requires only the code changes themselves, which makes it unsupervised. On the task of applying code changes, our model outperforms baseline models by 5.9 percentage points in accuracy. On commit message generation, our model performs on par with supervised models trained specifically for this task, which indicates that it encodes code changes well and can be further improved by pre-training on a larger dataset of easily gathered code changes.