Understanding the actions of both human and artificial intelligence (AI) agents is essential before modern AI systems can be fully integrated into our daily lives. In this paper, we show that, despite their remarkable recent success, deep-learning-based AI systems can be easily fooled by subtle adversarial noise into misinterpreting the intention of an action in interaction scenarios. Through a case study of skeleton-based human interactions, we propose a novel adversarial attack on interactions and demonstrate how DNN-based interaction models can be tricked into predicting the participants' reactions in unexpected ways. From a broader perspective, our attack is not confined to skeleton data but extends to any problem involving sequential regression. Our study highlights potential risks in the interaction loop between AI and humans, which must be carefully addressed before deploying AI systems in safety-critical applications.
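To make the threat model concrete, below is a minimal sketch of a targeted adversarial perturbation against a generic sequence-regression model. It is an illustration under assumed interfaces, not the attack proposed in the paper: the names `model` and `perturb_interaction`, the tensor shapes, and all hyperparameters are hypothetical.

```python
import torch

def perturb_interaction(model, skeleton_seq, target_reaction,
                        epsilon=0.01, steps=20, lr=0.005):
    """Craft a subtle perturbation of an input skeleton sequence so that a
    sequence-regression model predicts an attacker-chosen reaction.

    Hypothetical interface: `model` maps a (T, J, 3) tensor of joint
    coordinates to a predicted reaction sequence of the same shape.
    """
    # Optimize the noise, not the model weights.
    delta = torch.zeros_like(skeleton_seq, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        pred = model(skeleton_seq + delta)
        # Pull the predicted reaction toward the attacker's target.
        loss = torch.nn.functional.mse_loss(pred, target_reaction)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        with torch.no_grad():
            # Keep the noise within a small L-infinity budget so the
            # perturbed motion remains visually indistinguishable.
            delta.clamp_(-epsilon, epsilon)
    return (skeleton_seq + delta).detach()
```

The same loop applies to any sequential regression task: only the input tensor and the model change, which is why the attack surface is not specific to skeleton data.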