Deep learning has demonstrated its strengths in numerous binary analysis tasks, including function boundary detection, binary code search, function prototype inference, and value set analysis. When applying deep learning to binary analysis tasks, we need to decide what input should be fed into the neural network model. More specifically, we need to answer how to represent an instruction as a fixed-length vector. The idea of automatically learning instruction representations is intriguing; however, existing schemes fail to capture the unique characteristics of disassembly. These schemes ignore the complex intra-instruction structures and rely mainly on control flow, in which the contextual information is noisy and can be influenced by compiler optimizations. In this paper, we propose to pre-train an assembly language model called PalmTree for generating general-purpose instruction embeddings by conducting self-supervised training on large-scale unlabeled binary corpora. PalmTree utilizes three pre-training tasks to capture various characteristics of assembly language. These training tasks overcome the problems in existing schemes, and thus can help generate high-quality representations. We conduct both intrinsic and extrinsic evaluations, and compare PalmTree with other instruction embedding schemes. PalmTree has the best performance on intrinsic metrics, and outperforms the other instruction embedding schemes on all downstream tasks.
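To make the notion of "generating a general-purpose instruction embedding" concrete, the following is a minimal sketch, not the authors' implementation: it assumes a generic BERT-style encoder from the HuggingFace transformers library (with a hypothetical placeholder checkpoint, since PalmTree's real vocabulary and weights differ) and shows how a disassembled instruction could be mapped to a fixed-length vector by mean-pooling the encoder's final hidden states.

```python
# Minimal sketch: fixed-length instruction embeddings from a BERT-style encoder.
# Assumptions: a generic HuggingFace checkpoint stands in for a pre-trained
# assembly language model; mean-pooling is one common pooling choice.
import torch
from transformers import BertTokenizerFast, BertModel

MODEL_NAME = "bert-base-uncased"  # hypothetical placeholder checkpoint

tokenizer = BertTokenizerFast.from_pretrained(MODEL_NAME)
model = BertModel.from_pretrained(MODEL_NAME)
model.eval()

def instruction_embedding(instruction: str) -> torch.Tensor:
    """Map one assembly instruction, e.g. "mov rbp, rdi", to a fixed-length vector."""
    inputs = tokenizer(instruction, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # shape (1, seq_len, 768)
    return hidden.mean(dim=1).squeeze(0)            # shape (768,)

emb = instruction_embedding("mov rbp, rdi")
print(emb.shape)  # torch.Size([768])
```

Downstream tasks such as binary code search or function prototype inference would consume vectors like `emb` in place of hand-crafted instruction features.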