There is a lack of scientific testing of commercially available malware detectors, especially those that boast accurate classification of never-before-seen (i.e., zero-day) files using machine learning (ML). Consequently, the efficacy of, and gaps among, the available approaches are opaque, inhibiting end users from making informed network security decisions and researchers from targeting gaps in current detectors. In this paper, we present a scientific evaluation of four market-leading malware detection tools to assist an organization with two primary questions: (Q1) To what extent do ML-based tools accurately classify never-before-seen files without sacrificing detection ability on known files? (Q2) Is it worth purchasing a network-level malware detector to complement host-based detection? We tested each tool against 3,536 total files (2,554, or 72%, malicious; 982, or 28%, benign), including over 400 zero-day malware samples, and tested with a variety of file types and delivery protocols. We present statistical results on detection time and accuracy, consider complementary analysis (using multiple tools together), and provide two novel applications of a recent cost--benefit evaluation procedure by Iannacone & Bridges that incorporates all the above metrics into a single quantifiable cost. While the ML-based tools are more effective at detecting zero-day files and executables, the signature-based tool may still be the better overall option. Both network-based tools provide substantial (simulated) savings when paired with either host-based tool, yet both show poor detection rates on protocols other than HTTP and SMTP. Our results show that all four tools have near-perfect precision but alarmingly low recall, especially on file types other than executables and office files -- 37% of the malware tested, including all polyglot files, went undetected.
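The headline figures above can be sanity-checked with a few lines of arithmetic. The sketch below uses only the counts reported in the abstract; the "aggregate recall" is simply the complement of the reported 37% undetected-malware rate, not a figure computed elsewhere in the paper:

```python
# Sanity-check of the dataset composition and the implied aggregate recall,
# using only the counts reported in the abstract.
total, malicious, benign = 3536, 2554, 982

assert malicious + benign == total
print(f"malicious: {malicious / total:.0%}")  # -> 72%
print(f"benign:    {benign / total:.0%}")     # -> 28%

# 37% of the malware tested was undetected, so the implied
# aggregate recall across the tools is:
undetected_rate = 0.37
print(f"aggregate recall: {1 - undetected_rate:.0%}")  # -> 63%
```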