Gartner, a large research and advisory company, anticipates that by 2024, 80% of security operations centers (SOCs) will use machine learning (ML) based solutions to enhance their operations. In light of such widespread adoption, it is vital for the research community to identify and address usability concerns. This work presents the results of the first in situ usability assessment of ML-based tools. With the support of the US Navy, we leveraged the National Cyber Range, a large, air-gapped cyber testbed equipped with state-of-the-art network and user emulation capabilities, to study six US Naval SOC analysts' usage of two tools. Our analysis identified several serious usability issues, including multiple violations of established usability heuristics from user interface design. We also discovered that analysts lacked a clear mental model of how these tools generate scores, resulting in mistrust and/or misuse of the tools themselves. Surprisingly, we found no correlation between analysts' level of education or years of experience and their performance with either tool, suggesting that other factors, such as prior background knowledge or personality, play a significant role in ML-based tool usage. Our findings demonstrate that ML-based security tool vendors must put a renewed focus on working with analysts, both experienced and inexperienced, to ensure that their systems are usable and useful in real-world security operations settings.