Explainable Artificial Intelligence (XAI) is a promising approach to improving the transparency of machine learning (ML) pipelines. We systematize the rapidly growing (but fragmented) microcosm of studies that develop and utilize XAI methods for defensive and offensive cybersecurity tasks. We identify 3 cybersecurity stakeholders, i.e., model users, designers, and adversaries, who utilize XAI for 5 different objectives within an ML pipeline, namely 1) XAI-enabled decision support, 2) applied XAI for security tasks, 3) model verification via XAI, 4) explanation verification & robustness, and 5) offensive use of explanations. We further classify the literature w.r.t. the targeted security domain. Our analysis of the literature indicates that many XAI applications are designed with little understanding of how they might be integrated into analyst workflows; user studies for explanation evaluation are conducted in only 14% of the cases. The literature also rarely disentangles the roles of the various stakeholders. In particular, the role of the model designer is minimized within the security literature. To this end, we present an illustrative use case accentuating the role of model designers. We demonstrate cases where XAI can help in model verification and cases where it may lead to erroneous conclusions instead. The systematization and use case enable us to challenge several assumptions and present open problems that can help shape the future of XAI within cybersecurity.