Artificial Intelligence (AI) systems are increasingly used to make decision-making processes faster, more accurate, and more efficient. However, such systems are also at constant risk of attack. While the majority of attacks targeting AI-based applications aim to manipulate classifiers or training data and alter the output of an AI model, the recently proposed Sponge Attacks against AI models aim to impede the classifier's execution by consuming substantial resources. In this work, we propose \textit{Dual Denial of Decision (DDoD) attacks against collaborative Human-AI teams}. We discuss how such attacks aim to deplete \textit{both computational and human} resources and significantly impair decision-making capabilities. We describe DDoD attacks on human and computational resources and present potential risk scenarios in a series of exemplary domains.