Cybersecurity is a domain in which the data distribution is constantly changing, as attackers explore new patterns for attacking cyber infrastructure. Intrusion detection systems are one of the most important layers of cyber defense today. Machine-learning-based network intrusion detection systems have shown effective results in recent years, and deep learning models have further improved detection rates. However, the more accurate the model, the greater its complexity and hence the lower its interpretability. Deep neural networks are complex and hard to interpret, which makes them difficult to deploy in production because the reasons behind their decisions are unknown. In this paper, we use a deep neural network for network intrusion detection and propose an explainable AI framework that adds transparency at every stage of the machine learning pipeline. We do this by leveraging explainable AI algorithms, which aim to make ML models less of a black box by explaining why a prediction is made. These explanations provide measurable factors showing which features influence the prediction of a cyberattack and to what degree. The explanations are generated with SHAP, LIME, the Contrastive Explanations Method, ProtoDash, and Boolean Decision Rules via Column Generation. We apply these approaches to the NSL-KDD intrusion detection dataset and demonstrate the results.
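To make the workflow concrete, the sketch below shows how SHAP and LIME attributions could be generated for a DNN intrusion detector of the kind described above. It is a minimal illustration, not the paper's implementation: the network architecture, feature names, and the random placeholder data standing in for a preprocessed NSL-KDD feature matrix are all assumptions made for the example.

```python
# Minimal sketch: local feature attributions for a DNN intrusion detector
# using SHAP and LIME. Random data is a placeholder for preprocessed,
# numeric NSL-KDD features; the architecture is illustrative only.
import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer
from tensorflow import keras

rng = np.random.default_rng(0)
n_features = 41                                  # NSL-KDD records have 41 features
X_train = rng.random((500, n_features))          # placeholder for preprocessed data
y_train = rng.integers(0, 2, 500)                # 0 = normal traffic, 1 = attack
X_test = rng.random((20, n_features))

# Small feed-forward network standing in for the paper's DNN classifier.
model = keras.Sequential([
    keras.layers.Input(shape=(n_features,)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(X_train, y_train, epochs=5, verbose=0)

# SHAP: model-agnostic KernelExplainer over a small background sample
# keeps the example lightweight; DeepExplainer is an alternative for Keras.
background = shap.sample(X_train, 50)
explainer = shap.KernelExplainer(model.predict, background)
shap_values = explainer.shap_values(X_test[:1])
print("SHAP attributions for one flow:", np.round(shap_values[0], 3))

# LIME: local explanation of the same prediction. LIME expects a function
# returning per-class probabilities, so the sigmoid output is expanded.
def predict_proba(x):
    p = model.predict(x, verbose=0).ravel()
    return np.column_stack([1 - p, p])

lime_explainer = LimeTabularExplainer(
    X_train, mode="classification",
    feature_names=[f"f{i}" for i in range(n_features)],  # hypothetical names
    class_names=["normal", "attack"],
)
exp = lime_explainer.explain_instance(X_test[0], predict_proba, num_features=5)
print(exp.as_list())                             # top features driving this prediction
```

In a real pipeline, the placeholder arrays would be replaced by encoded and scaled NSL-KDD records, and the same explainer objects could be reused across test flows to compare which features consistently drive attack predictions.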