Advances in deep learning have enabled a wide range of promising applications. However, these systems are vulnerable to Adversarial Machine Learning (AML) attacks: adversarially crafted perturbations to their inputs can cause them to misclassify. Several state-of-the-art adversarial attacks can reliably fool classifiers, making these attacks a significant threat. Adversarial attack generation algorithms focus primarily on creating successful examples while controlling the noise magnitude and distribution to make detection more difficult. The underlying assumption of these attacks is that the adversarial noise is generated offline, making their execution time a secondary consideration. Recently, however, just-in-time adversarial attacks, in which an attacker opportunistically generates adversarial examples on the fly, have been shown to be possible. This paper introduces a new problem: how do we generate adversarial noise under real-time constraints to support such real-time adversarial attacks? Understanding this problem improves our understanding of the threat these attacks pose to real-time systems and provides security evaluation benchmarks for future defenses. We therefore first conduct a run-time analysis of adversarial generation algorithms. Universal attacks produce a general perturbation offline, with no online overhead, and can be applied to any input; however, their success rate is limited by their generality. In contrast, online algorithms, which craft a perturbation for a specific input, are computationally expensive, making them unsuitable for operation under time constraints. Thus, we propose ROOM, a novel Real-time Online-Offline attack construction Model in which an offline component serves to warm up the online algorithm, making it possible to generate highly successful attacks under time constraints.
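To make the offline/online split concrete, the sketch below illustrates one plausible instantiation of the idea (not the paper's actual ROOM algorithm): a few steps of projected gradient descent (PGD) warm-started from a precomputed universal perturbation, so that only a small, time-bounded online budget is needed per input. The names `model` and `universal_delta`, along with all hyperparameters, are illustrative assumptions.

```python
# A minimal sketch of the online-offline idea, assuming a PyTorch
# classifier and a precomputed universal perturbation. This is NOT the
# paper's ROOM implementation; it only illustrates warm-starting an
# online attack from an offline component.

import torch
import torch.nn.functional as F

def warm_started_pgd(model, x, y, universal_delta, epsilon=8 / 255,
                     alpha=2 / 255, steps=3):
    """Run a few PGD steps starting from an offline universal perturbation."""
    # Offline component: initialize from the universal perturbation
    # instead of zero, so the online search starts near a good solution.
    delta = universal_delta.clone().detach().clamp(-epsilon, epsilon)
    delta.requires_grad_(True)

    # Online component: a small, fixed iteration budget to meet the
    # real-time constraint.
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad = torch.autograd.grad(loss, delta)[0]
        # Ascend the loss (untargeted attack), then project back onto
        # the L-infinity ball and the valid pixel range.
        delta = (delta + alpha * grad.sign()).clamp(-epsilon, epsilon)
        delta = ((x + delta).clamp(0, 1) - x).detach().requires_grad_(True)

    return (x + delta).detach()
```

Under this framing, the universal perturbation carries most of the attack strength at zero online cost, and the few online steps specialize it to the current input, trading a small, predictable latency for a much higher success rate than the universal attack alone.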