With the availability of large datasets and ever-increasing computing power, there has been a growing use of data-driven artificial intelligence (AI) systems, which have shown their potential for successful application in diverse areas. However, many of these systems are unable to provide their users with information about the rationale behind their decisions. Such a lack of understanding can be a major drawback, especially in critical domains such as those related to cybersecurity. In light of this problem, in this paper we make three contributions: (i) a proposal and discussion of desiderata for the explanation of outputs generated by AI-based cybersecurity systems; (ii) a comparative analysis of approaches in the literature on Explainable Artificial Intelligence (XAI) through the lens of both our desiderata and further dimensions typically used for examining XAI approaches; and (iii) a general architecture that can serve as a roadmap for guiding research efforts towards the development of explainable AI-based cybersecurity systems; at its core, this roadmap proposes combining several research lines in a novel way to tackle the unique challenges that arise in this context.