Deep learning (DL) models of code have recently reported great progress in vulnerability detection. In some cases, DL-based models have outperformed static analysis tools. Although many promising models have been proposed, we do not yet have a good understanding of these models, which limits further advancement in model robustness, debugging, and deployment for vulnerability detection. In this paper, we surveyed and reproduced 9 state-of-the-art (SOTA) deep learning models on 2 widely used vulnerability detection datasets: Devign and MSR. We investigated 6 research questions in three areas, namely model capabilities, training data, and model interpretation. We experimentally demonstrated the variability between different runs of a model and the low agreement among different models' outputs. We compared models trained on specific types of vulnerabilities with a model trained on all vulnerabilities at once. We explored the types of programs DL may consider "hard" to handle. We investigated how training data size and training data composition relate to model performance. Finally, we studied model interpretations and analyzed the important features that the models used to make predictions. We believe that our findings can help better understand model results, provide guidance on preparing training data, and improve the robustness of the models. All of our datasets, code, and results are available at https://doi.org/10.6084/m9.figshare.20791240.