We find that images contain intrinsic structure that enables the reversal of many adversarial attacks. Attack vectors not only cause image classifiers to fail, but also collaterally disrupt incidental structure in the image. We demonstrate that modifying the attacked image to restore this natural structure reverses many types of attacks, providing a defense. Experiments demonstrate significantly improved robustness for several state-of-the-art models across the CIFAR-10, CIFAR-100, SVHN, and ImageNet datasets. The defense remains effective even when the attacker is aware of the defense mechanism. Since the defense is deployed at inference rather than training time, it is compatible with pre-trained networks as well as most other defenses. Our results suggest that deep networks are vulnerable to adversarial examples partly because their representations do not enforce the natural structure of images.
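To make the inference-time idea concrete, here is a minimal, hypothetical sketch of such a reversal step in PyTorch. The abstract does not specify the objective, so `total_variation` is used only as a crude stand-in for a richer "natural structure" loss; the function name `reverse_attack` and the step sizes are illustrative assumptions, not the paper's actual method.

```python
import torch

def total_variation(x):
    # Crude smoothness prior, standing in for a richer self-supervised
    # "natural structure" objective (hypothetical choice for illustration).
    return (x[..., 1:, :] - x[..., :-1, :]).abs().mean() + \
           (x[..., :, 1:] - x[..., :, :-1]).abs().mean()

def reverse_attack(x_adv, structure_loss=total_variation,
                   steps=10, lr=1 / 255, eps=8 / 255):
    # Search for a small correction `delta` that lowers the structure loss:
    # in effect, an attack run in reverse against the disrupted incidental
    # structure rather than against the classifier.
    delta = torch.zeros_like(x_adv, requires_grad=True)
    for _ in range(steps):
        loss = structure_loss(x_adv + delta)
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta -= lr * grad.sign()        # signed gradient descent step
            delta.clamp_(-eps, eps)          # keep the correction imperceptible
            delta.copy_((x_adv + delta).clamp(0, 1) - x_adv)  # valid pixel range
    return (x_adv + delta).detach()

# Usage: purify before classification, with no retraining of the model.
# x_restored = reverse_attack(x_adv)
# logits = pretrained_model(x_restored)
```

Because the correction is computed per input at test time, this kind of procedure composes with any pre-trained classifier, which matches the compatibility claim above.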