A growing number of botnet families have been successfully detected using deep learning architectures. As the variety of attacks increases, these architectures must become more robust, since they have proven to be highly sensitive to small but well-constructed perturbations of their input. Botnet detection requires extremely low false-positive rates (FPRs), which are not commonly attainable with contemporary deep learning; attackers try to raise the FPR by crafting poisoned samples. Most recent research has focused on using model loss functions both to build adversarial examples and to train robust models. In this paper, two LSTM-based classification algorithms for botnet detection are presented, each achieving an accuracy above 98\%. An adversarial attack is then proposed that reduces this accuracy to about 30\%. Next, after examining methods for computing uncertainty, a defense method is proposed that restores the accuracy to about 70\%. The uncertainty of the proposed methods' accuracy is investigated using deep ensemble and stochastic weight averaging quantification methods.
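The deep-ensemble uncertainty quantification mentioned above can be sketched minimally as follows. This is an illustrative example only, not the paper's implementation: the ensemble "members" stand in for independently trained LSTM classifiers, and their softmax outputs are simulated with random logits to show how the ensemble mean prediction and predictive entropy (the usual uncertainty score) are computed.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

n_members, n_samples, n_classes = 5, 8, 2
# Stand-in logits from 5 ensemble members on 8 flows (benign vs. botnet);
# in the paper's setting each member would be a separately trained LSTM.
logits = rng.normal(size=(n_members, n_samples, n_classes))
probs = softmax(logits)                      # (members, samples, classes)

mean_probs = probs.mean(axis=0)              # ensemble prediction per sample
# Predictive entropy: high when members disagree or are individually unsure,
# so high-entropy samples (e.g. adversarial inputs) can be flagged or rejected.
entropy = -(mean_probs * np.log(mean_probs + 1e-12)).sum(axis=-1)

predictions = mean_probs.argmax(axis=-1)     # final class per sample
```

A defense along these lines would reject or down-weight predictions whose entropy exceeds a threshold chosen on clean validation data.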