Deep learning (DL) models for code have recently made great progress in vulnerability detection. In some cases, DL-based models have outperformed static analysis tools. Although many strong models have been proposed, we do not yet have a good understanding of these models. This limits the further advancement of model robustness, debugging, and deployment for vulnerability detection. In this paper, we surveyed and reproduced 9 state-of-the-art (SOTA) deep learning models on 2 widely used vulnerability detection datasets: Devign and MSR. We investigated 6 research questions in three areas, namely model capabilities, training data, and model interpretation. We experimentally demonstrated the variability between different runs of a model and the low agreement among different models' outputs. We compared models trained on specific types of vulnerabilities with a model trained on all vulnerabilities at once. We explored the types of programs DL may consider "hard" to handle. We investigated the relations of training data size and training data composition with model performance. Finally, we studied model interpretations and analyzed important features that the models used to make predictions. We believe that our findings can help better understand model results, provide guidance on preparing training data, and improve the robustness of the models. All of our datasets, code, and results are available at https://figshare.com/s/284abfba67dba448fdc2.