Graph neural networks (GNNs) have been used to build multi-layer graph models for a range of cybersecurity applications, from fraud detection to software vulnerability analysis. Unfortunately, like traditional neural networks, GNNs suffer from a lack of transparency; that is, it is challenging to interpret their predictions. Prior works have focused on explaining specific factors (e.g., edges or node attributes alone) of a GNN model. In this work, we design and implement Illuminati, a comprehensive and accurate explanation framework for GNN-based cybersecurity applications. Given a graph and a pre-trained GNN model, Illuminati identifies the important nodes, edges, and attributes that contribute to the prediction, while requiring no prior knowledge of the GNN model. We evaluate Illuminati on two cybersecurity applications: code vulnerability detection and smart contract vulnerability detection. The experiments show that Illuminati produces more accurate explanations than state-of-the-art methods; specifically, 87.6% of the subgraphs identified by Illuminati retain the original prediction, a 10.3 percentage-point improvement over the best prior result of 77.3%. Furthermore, Illuminati's explanations are easily understood by domain experts, suggesting its significant usefulness for the development of cybersecurity applications.
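For concreteness, the 87.6% figure above counts subgraph explanations that, on their own, retain the model's original prediction. Below is a minimal sketch of that retention check, assuming a hypothetical dense-adjacency GNN interface; `ToyGCN`, `retains_prediction`, and the chosen node set are illustrative placeholders, not Illuminati's actual API.

```python
# Minimal sketch (not the authors' implementation) of the prediction-retention
# check used to evaluate explanation quality: a subgraph explanation counts as
# faithful if the model's prediction on the subgraph alone matches its
# prediction on the full graph. ToyGCN and the dense-adjacency interface are
# illustrative assumptions.
import torch
import torch.nn as nn

class ToyGCN(nn.Module):
    """A toy dense-adjacency GNN for graph classification."""
    def __init__(self, in_dim: int, num_classes: int):
        super().__init__()
        self.lin = nn.Linear(in_dim, num_classes)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        h = adj @ x                       # one round of neighbor aggregation
        return self.lin(h).mean(dim=0)    # mean-pool node logits to graph logits

def retains_prediction(model: nn.Module, x: torch.Tensor,
                       adj: torch.Tensor, important_nodes: torch.Tensor) -> bool:
    """True if the subgraph induced by `important_nodes` keeps the
    same predicted class as the full graph."""
    model.eval()
    with torch.no_grad():
        full_cls = model(x, adj).argmax()
        keep = torch.zeros(x.size(0), dtype=x.dtype)
        keep[important_nodes] = 1.0
        x_sub = x * keep.unsqueeze(-1)                         # mask node attributes
        adj_sub = adj * keep.unsqueeze(0) * keep.unsqueeze(1)  # mask edges
        sub_cls = model(x_sub, adj_sub).argmax()
    return bool((sub_cls == full_cls).item())

# Usage on a random 5-node graph, keeping a hypothetical explanation {0, 2, 3}:
model = ToyGCN(in_dim=8, num_classes=2)
x, adj = torch.randn(5, 8), (torch.rand(5, 5) > 0.5).float()
print(retains_prediction(model, x, adj, torch.tensor([0, 2, 3])))
```

The reported metric is then simply the fraction of test graphs for which such a check returns true.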