In this paper, we leverage low-level compiler intermediate representations (IRs) to improve code translation. Traditional transpilers rely on syntactic information and handcrafted rules, which limits their applicability and produces unnatural-looking code. Applying neural machine translation (NMT) approaches to code has successfully broadened the set of programs for which one can obtain a natural-looking translation. However, these approaches treat code as sequences of text tokens, and still fail to distinguish between similar pieces of code that have different semantics in different languages. The consequence is low-quality translation, which reduces the practicality of NMT and underscores the need for approaches that significantly increase its accuracy. Here we propose to augment code translation with IRs, specifically LLVM IR, with results on the C++, Java, Rust, and Go languages. Our method improves upon the state of the art for unsupervised code translation, increasing the number of correct translations by 11% on average, and by up to 79% for the Java -> Rust pair with greedy decoding. With beam search, it increases the number of correct translations by 5.5% on average. We extend previous test sets for code translation by adding hundreds of Go and Rust functions. Additionally, we train models that perform well on IR decompilation, i.e. generating programming-language source code from IR, and study the use of IRs as an intermediate pivot for translation.
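As a minimal illustration of the semantic divergence mentioned above (an illustrative sketch, not an example taken from the paper's experiments): the modulo operator `%` looks identical across languages, yet Python defines it with floored division, while C++, Java, Rust, and Go truncate toward zero, which is the behavior LLVM IR makes explicit via the `srem` instruction. A token-level translator that copies `%` verbatim would silently change the result for negative operands.

```python
# Same-looking expression, different semantics across languages.
# Python's % follows the sign of the divisor (floored division):
python_result = -7 % 3                    # -7 = (-3)*3 + 2  ->  2

# C++/Java/Rust/Go truncate the quotient toward zero (LLVM `srem`);
# we emulate that truncating semantics here for comparison:
c_like_result = -7 - int(-7 / 3) * 3      # -7 - (-2)*3      -> -1

print(python_result)  # 2
print(c_like_result)  # -1
```

Lowering both programs to a common IR surfaces this difference explicitly, instead of leaving it hidden behind an identical surface token.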