Pre-trained models for programming languages have proven their significant value in various code-related tasks, such as code search, code clone detection, and code translation. Currently, most pre-trained models treat a code snippet as a sequence of tokens or only focus on the data flow between code identifiers. However, they ignore rich code syntax and hierarchy, which can provide important structural information and semantic rules of code to help enhance code representations. In addition, although BERT-based code pre-trained models achieve high performance on many downstream tasks, the natively derived sequence representations of BERT have been shown to be of low quality, performing poorly on code matching and similarity tasks. To address these problems, we propose CLSEBERT, a Contrastive Learning Framework for Syntax Enhanced Code Pre-Trained Model, to deal with various code intelligence tasks. In the pre-training stage, we consider the code syntax and hierarchy contained in the Abstract Syntax Tree (AST) and leverage contrastive learning to learn noise-invariant code representations. Besides masked language modeling (MLM), we also introduce two novel pre-training objectives. One is to predict the edges between nodes in the abstract syntax tree, and the other is to predict the types of code tokens. Through extensive experiments on four code intelligence tasks, we demonstrate the effectiveness of our proposed model.
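As a rough sketch of the contrastive objective mentioned above (the exact formulation used by CLSEBERT may differ), a standard InfoNCE-style loss can be written as follows, where $h_i$ denotes the representation of a code snippet, $h_i^{+}$ the representation of its noise-augmented counterpart, $\mathrm{sim}(\cdot,\cdot)$ cosine similarity, $\tau$ a temperature hyperparameter, and $N$ the batch size; all of these symbols are illustrative assumptions rather than notation taken from the paper:

$$\mathcal{L}_{\mathrm{CL}} = -\sum_{i=1}^{N} \log \frac{\exp\!\big(\mathrm{sim}(h_i, h_i^{+})/\tau\big)}{\sum_{j=1}^{N} \exp\!\big(\mathrm{sim}(h_i, h_j^{+})/\tau\big)}$$

Minimizing this loss pulls each snippet toward its perturbed version while pushing it away from other snippets in the batch, which is one common way to obtain the noise-invariant representations the abstract refers to.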