Back-translation is widely known for its effectiveness in neural machine translation when little to no parallel data is available. In this approach, a source-to-target model is coupled with a target-to-source model trained in parallel. The target-to-source model generates noisy sources, while the source-to-target model is trained to reconstruct the targets, and vice versa. Recent multilingual pre-trained sequence-to-sequence models for programming languages have proven very effective across a broad spectrum of downstream software engineering tasks. Hence, it is compelling to train them to build programming language translation systems via back-translation. However, these models cannot be further trained via back-translation since they learn to output sequences in the same language as the inputs during pre-training. As an alternative, we propose performing back-translation via code summarization and generation. In code summarization, a model learns to generate natural language (NL) summaries given code snippets. In code generation, the model learns to do the opposite. Therefore, target-to-source generation in back-translation can be viewed as target-to-NL-to-source generation. We show that our proposed approach performs competitively with state-of-the-art methods.
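To make the decomposed back-translation step concrete, below is a minimal Python sketch of one target-to-source training iteration as the abstract describes it: the target code is first summarized into NL, a noisy source is generated from that summary, and the source-to-target translator is trained to reconstruct the original target. All names here (`summarizer`, `generator`, `translator`, `train_step`) are hypothetical placeholders, not the paper's actual implementation.

```python
# Minimal sketch of back-translation via code summarization and generation.
# All model objects and helper names are hypothetical placeholders; the
# paper's actual training setup may differ.

def back_translation_step(summarizer, generator, translator,
                          target_batch, train_step):
    """One unsupervised training step for the source->target direction.

    Instead of a direct target->source model, the reverse direction is
    decomposed as target -> NL summary -> source.
    """
    # 1. Summarize target-language code snippets into NL summaries
    #    (the code summarization direction).
    nl_summaries = summarizer.generate(target_batch)

    # 2. Generate noisy source-language code from those summaries
    #    (the code generation direction).
    noisy_sources = generator.generate(nl_summaries, lang="source")

    # 3. Train the source->target translator to reconstruct the original
    #    targets from the synthetic, noisy sources.
    loss = train_step(translator, inputs=noisy_sources, labels=target_batch)
    return loss
```

The symmetric step for the opposite direction, per the "vice versa" above, would swap the roles of the two programming languages while reusing the same summarize-then-generate pipeline.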