Back-translation is widely known for its effectiveness in neural machine translation when there is little to no parallel data. In this approach, a source-to-target model is coupled with a target-to-source model trained in parallel. The target-to-source model generates noisy sources, while the source-to-target model is trained to reconstruct the targets, and vice versa. Recently developed multilingual pre-trained sequence-to-sequence models for programming languages have proven very effective for a broad spectrum of downstream software engineering tasks. Hence, it is compelling to train them to build programming language translation systems via back-translation. However, these models cannot be further trained via back-translation since they learn to output sequences in the same language as the inputs during pre-training. As an alternative, we propose performing back-translation via code summarization and generation. In code summarization, a model learns to generate natural language (NL) summaries given code snippets. In code generation, the model learns to do the opposite. Therefore, target-to-source generation in back-translation can be viewed as target-to-NL-to-source generation. We show that our proposed approach performs competitively with state-of-the-art methods. We have made the code publicly available.
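A minimal sketch of one back-translation round through the NL pivot described above. The model objects, their `generate` and `reconstruction_loss` methods, and the language tags are hypothetical placeholders, not the paper's actual implementation; the sketch only illustrates how code-to-NL summarization supplies the noisy "source" from which the generator reconstructs the original snippet.

```python
def back_translate_step(java_snippets, python_snippets,
                        summarizer, generator, optimizer):
    """One hypothetical round of back-translation via summarization and generation.

    For each snippet in a target language, the summarizer produces a noisy NL
    summary (playing the role of the source side); the generator is then
    trained to reconstruct the original snippet from that summary, and the
    roles alternate across languages.
    """
    for target_lang, snippets in (("java", java_snippets),
                                  ("python", python_snippets)):
        for code in snippets:
            # Code -> NL: produce a noisy intermediate summary (no gradients
            # flow through this generation step).
            nl_summary = summarizer.generate(code)

            # NL -> code: train the generator to reconstruct the original
            # snippet in its own language from the noisy summary.
            loss = generator.reconstruction_loss(nl_summary,
                                                 target=code,
                                                 lang=target_lang)
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()
```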