Machine learning (ML) on graph-structured data has recently attracted growing interest in the context of intrusion detection in the cybersecurity domain. These ML methods are gaining traction due to the increasing volume of data generated by monitoring tools and increasingly sophisticated attacks. Knowledge graphs and their corresponding learning techniques, such as Graph Neural Networks (GNNs), are finding application in cybersecurity thanks to their ability to seamlessly integrate data from multiple domains using human-understandable vocabularies. However, like other connectionist models, GNNs lack transparency in their decision making. This is especially problematic in cybersecurity, where the high number of false positive alerts means triage must be performed by domain experts, requiring considerable manpower. We therefore address Explainable AI (XAI) for GNNs to enhance trust management, exploring the combination of symbolic and sub-symbolic methods that incorporate domain knowledge. We evaluated this approach by generating explanations in an industrial demonstrator system. The proposed method is shown to produce intuitive explanations for alerts across a diverse range of scenarios. Not only do the explanations provide deeper insight into the alerts, but they also reduce false positive alerts by 66%, and by 93% when the fidelity metric is included.