Decompilation, the problem of reversing the compilation process, is an important tool in the reverse engineering of computer software. Recently, researchers have proposed using techniques from neural machine translation to automate the decompilation process. Although such techniques hold the promise of targeting a wider range of source and assembly languages, to date they have primarily targeted C code. In this paper we argue that existing neural decompilers have achieved higher accuracy at the cost of requiring language-specific domain knowledge, such as tokenizers and parsers to build an abstract syntax tree (AST) for the source language, which increases the overhead of supporting new languages. We explore a different tradeoff that, to the extent possible, treats the assembly and source languages as plain text, and show that this allows us to build a decompiler that is easily retargetable to new languages. We evaluate our prototype decompiler, Beyond The C (BTC), on Go, Fortran, OCaml, and C, and examine the impact of parameters such as tokenization and training data selection on the quality of decompilation, finding that it achieves comparable decompilation results to prior work in neural decompilation with significantly less domain knowledge. We will release our training data, trained decompilation models, and code to help encourage future research into language-agnostic decompilation.
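To illustrate the kind of language-agnostic preprocessing the abstract describes (treating assembly and source purely as text, with no source-language parser or AST), the following is a minimal, hypothetical sketch. It is not the authors' implementation; the function names, regular expression, and the Go snippet used as example data are illustrative assumptions only.

```python
# Hypothetical sketch: prepare (assembly, source) pairs for a generic
# seq2seq translation model while treating both sides as plain text.
import re
from collections import Counter

def tokenize_plain(text):
    """Language-agnostic tokenization: split into word-like chunks and
    individual punctuation characters; no parser or AST is involved."""
    return re.findall(r"\w+|[^\w\s]", text)

def build_vocab(token_seqs, min_freq=1):
    """Build a shared vocabulary over assembly and source tokens."""
    counts = Counter(tok for seq in token_seqs for tok in seq)
    vocab = {"<pad>": 0, "<unk>": 1, "<bos>": 2, "<eos>": 3}
    for tok, freq in counts.most_common():
        if freq >= min_freq and tok not in vocab:
            vocab[tok] = len(vocab)
    return vocab

if __name__ == "__main__":
    asm = "movl $0, -4(%rbp)\n jmp .L2"                # assembly, as plain text
    src = "for i := 0; i < n; i++ { sum += i }"        # Go source, as plain text
    pairs = [(tokenize_plain(asm), tokenize_plain(src))]
    vocab = build_vocab([seq for pair in pairs for seq in pair])
    print(pairs[0][0][:8], len(vocab))
```

Because no step here depends on the grammar of the source language, retargeting to a new language under this scheme amounts to supplying new (assembly, source) text pairs rather than writing a new tokenizer or parser.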