Vulnerability identification is crucial to cybersecurity in the software industry. Earlier identification methods required significant manual effort to craft features or annotate vulnerable code. Although recent pre-trained models alleviate this issue, they overlook the rich, multi-type structural information contained in the code itself. In this paper, we propose a novel Multi-View Pre-Trained Model (MV-PTM) that encodes both the sequential and the multi-type structural information of source code and uses contrastive learning to enhance code representations. Experiments conducted on two public datasets demonstrate the superiority of MV-PTM. In particular, MV-PTM improves over GraphCodeBERT by 3.36\% on average in terms of F1 score.
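To make the contrastive-learning idea concrete, the following is a minimal sketch of an InfoNCE-style objective that pulls together the two views (sequential and structural embeddings) of the same code snippet while pushing apart views of different snippets. This is an illustrative assumption, not the paper's actual loss; the function name, shapes, and temperature value are hypothetical.

```python
import numpy as np

def info_nce_loss(seq_emb, struct_emb, temperature=0.1):
    """Illustrative InfoNCE-style contrastive loss between two views.

    Row i of `seq_emb` and row i of `struct_emb` are assumed to embed
    the same code snippet (a positive pair); all other rows serve as
    in-batch negatives.
    """
    # L2-normalize so dot products become cosine similarities.
    seq = seq_emb / np.linalg.norm(seq_emb, axis=1, keepdims=True)
    struct = struct_emb / np.linalg.norm(struct_emb, axis=1, keepdims=True)
    sim = seq @ struct.T / temperature          # pairwise similarity matrix
    sim = sim - sim.max(axis=1, keepdims=True)  # numerical stability
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    # Matching views sit on the diagonal; minimize their negative log-probability.
    return -np.mean(np.diag(log_prob))
```

As a sanity check, aligned views (identical row order) should yield a lower loss than misaligned ones, since the positives then dominate the softmax.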