Signature-based malware detectors have proven insufficient, as even a small change to malicious executable code can evade them. Many machine learning models have been proposed to detect a wide variety of malware efficiently, yet many of these models are susceptible to adversarial attacks: attacks that craft intentionally designed inputs to force a model to misclassify. Our work explores the vulnerability of current state-of-the-art malware detectors to such attacks. We train a Transformer-based malware detector, carry out adversarial attacks that achieve a misclassification rate of 23.9%, and propose defenses that reduce this rate by half. An implementation of our work can be found at https://github.com/yashjakhotiya/Adversarial-Attacks-On-Transformers.
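To make the threat model concrete, below is a minimal, hypothetical sketch of one common attack of this kind: an FGSM-style gradient perturbation applied to the embedding layer of a byte-level Transformer classifier. The architecture, dimensions, and data here are illustrative assumptions for exposition, not the detector or attack actually used in this work (see the repository above for that); since raw bytes are discrete, the sketch perturbs the continuous embedding space.

```python
# Illustrative sketch only: a toy byte-level Transformer detector and an
# FGSM-style attack on its embeddings. All names and sizes are assumptions,
# not the paper's actual implementation.
import torch
import torch.nn as nn


class ByteTransformerDetector(nn.Module):
    """Toy Transformer encoder over byte sequences (assumed architecture)."""

    def __init__(self, vocab_size=257, d_model=64, nhead=4, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)
        self.classifier = nn.Linear(d_model, 2)  # benign vs. malware logits

    def forward_from_embeddings(self, emb):
        h = self.encoder(emb)
        return self.classifier(h.mean(dim=1))  # mean-pool over the sequence

    def forward(self, byte_ids):
        return self.forward_from_embeddings(self.embed(byte_ids))


def fgsm_on_embeddings(model, byte_ids, label, epsilon=0.1):
    """One-step FGSM: nudge embeddings in the gradient-sign direction that
    increases the classification loss, hoping to flip the prediction."""
    emb = model.embed(byte_ids).detach().requires_grad_(True)
    logits = model.forward_from_embeddings(emb)
    loss = nn.functional.cross_entropy(logits, label)
    loss.backward()
    return (emb + epsilon * emb.grad.sign()).detach()


model = ByteTransformerDetector()
byte_ids = torch.randint(0, 256, (1, 128))  # stand-in for a binary's byte stream
label = torch.tensor([1])                   # 1 = malware (assumed label convention)
adv_emb = fgsm_on_embeddings(model, byte_ids, label)
print(model.forward_from_embeddings(adv_emb).argmax(dim=1))  # possibly flipped
```

A typical defense in this setting is adversarial training: the perturbed embeddings produced by an attack like the one above are mixed back into the training batches so the detector learns to classify them correctly.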