Code summarization and generation empower conversion between programming languages (PL) and natural language (NL), while code translation facilitates the migration of legacy code from one PL to another. This paper introduces PLBART, a sequence-to-sequence model capable of performing a broad spectrum of program and language understanding and generation tasks. PLBART is pre-trained on an extensive collection of Java and Python functions and associated NL text via denoising autoencoding. Experiments on language generation tasks, including code summarization, generation, and translation across seven programming languages, show that PLBART outperforms or rivals state-of-the-art models. Moreover, experiments on discriminative tasks, e.g., program repair, clone detection, and vulnerable code detection, demonstrate PLBART's effectiveness in program understanding. Furthermore, analysis reveals that PLBART learns program syntax, style (e.g., identifier naming conventions), and logical flow (e.g., an if block inside an else block is equivalent to an else if block), which are crucial to program semantics, and thus it excels even with limited annotations.
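To make the denoising-autoencoding objective concrete, the following is a minimal Python sketch, not PLBART's actual implementation: spans of tokens in a (tokenized) function are collapsed into a mask token, and the sequence-to-sequence model is trained to reconstruct the original sequence. The mask token string, span length, and masking ratio here are illustrative assumptions.

```python
# Minimal sketch of a BART-style span-masking noise function (assumed details).
import random

MASK = "<mask>"  # placeholder mask token; the real vocabulary symbol may differ

def mask_spans(tokens, mask_ratio=0.35, max_span=3):
    """Replace random short spans of tokens with a single MASK token."""
    noisy, i = [], 0
    while i < len(tokens):
        if random.random() < mask_ratio:
            noisy.append(MASK)
            i += random.randint(1, max_span)  # skip (mask out) a short span
        else:
            noisy.append(tokens[i])
            i += 1
    return noisy

# Example: a tokenized Java function; the model sees the noisy input
# and is trained to generate the original token sequence.
source = "public int add ( int a , int b ) { return a + b ; }".split()
noisy_input = mask_spans(source)
print(noisy_input)
```

The same corruption-and-reconstruction scheme applies to the associated NL text, which is how a single pre-trained model can later be fine-tuned for both PL-to-NL and NL-to-PL generation.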