Machine learning models are now widely deployed in real-world applications. However, the existence of adversarial examples has long been considered a real threat to such models. While numerous defenses aiming to improve robustness have been proposed, many have been shown to be ineffective. As these vulnerabilities are still nowhere near being eliminated, we propose an alternative, deployment-based defense paradigm that goes beyond the traditional white-box and black-box threat models. Instead of training a single partially robust model, one could train a set of same-functionality, yet adversarially-disjoint, models with minimal attack transferability between them. These models could then be randomly and individually deployed, such that accessing one of them minimally affects the others. Our experiments on CIFAR-10 against a wide range of attacks show that our disjoint models achieve significantly lower attack transferability than an ensemble-diversity baseline. In addition, compared to an adversarially trained set, we achieve a higher average robust accuracy while maintaining accuracy on clean examples.
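To make the central evaluation quantity concrete, the following is a minimal sketch (not the paper's code) of how cross-model attack transferability could be measured: adversarial examples are crafted against a source model with a standard FGSM attack and then evaluated on a separate target model. The model definitions, the data loader, and the perturbation budget `eps` are assumptions for illustration only.

```python
import torch
import torch.nn.functional as F


def fgsm_attack(model, x, y, eps=8 / 255):
    """Craft FGSM adversarial examples against `model` (illustrative attack choice)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # One signed-gradient step, clipped back to the valid image range.
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()


def transferability(source_model, target_model, loader, eps=8 / 255):
    """Fraction of adversarial examples crafted on `source_model` that also
    fool `target_model`; lower values indicate more adversarially-disjoint models."""
    fooled, total = 0, 0
    for x, y in loader:
        x_adv = fgsm_attack(source_model, x, y, eps)
        with torch.no_grad():
            preds = target_model(x_adv).argmax(dim=1)
        fooled += (preds != y).sum().item()
        total += y.numel()
    return fooled / total
```

Under this hypothetical setup, the paradigm described above aims to keep `transferability(model_i, model_j, loader)` low for every pair of deployed models, so that an attack developed against one deployed copy carries over minimally to the others.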