An adversarial patch can arbitrarily manipulate image pixels within a restricted region to induce model misclassification. This localized attack has gained significant attention because the adversary can mount a physically realizable attack by attaching patches to the victim object. Recent provably robust defenses generally follow the PatchGuard framework, using CNNs with small receptive fields and secure feature aggregation to obtain robust model predictions. In this paper, we extend PatchGuard to PatchGuard++, which provably detects adversarial patch attacks to boost both provable robust accuracy and clean accuracy. In PatchGuard++, we first use a CNN with small receptive fields for feature extraction so that the number of features corrupted by the adversarial patch is bounded. Next, we apply masks in the feature space and evaluate predictions on all possible masked feature maps. Finally, we extract a pattern from all masked predictions to detect the adversarial patch attack. We evaluate PatchGuard++ on ImageNette (a 10-class subset of ImageNet), ImageNet, and CIFAR-10 and demonstrate that it significantly improves both provable robustness and clean performance.
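Below is a minimal sketch, in Python/PyTorch, of the masked-prediction pipeline described above; it is not the authors' implementation. The class name `PatchGuardPPDetector`, the zero-masking strategy, the `mask_size` parameter, and the mean-pooled classification head are all illustrative assumptions. The point it demonstrates is the core mechanism: evaluate a prediction on every possible masked feature map, and treat disagreement among those predictions as the pattern that flags a patch attack.

```python
# Illustrative sketch of PatchGuard++-style masked-prediction detection.
# All component choices (zero masking, mean pooling, linear head) are
# assumptions for this example, not the paper's exact configuration.
import torch
import torch.nn as nn

class PatchGuardPPDetector(nn.Module):
    def __init__(self, feature_extractor, classifier, mask_size=2):
        super().__init__()
        self.feature_extractor = feature_extractor  # small-receptive-field CNN
        self.classifier = classifier                # head applied to pooled features
        self.mask_size = mask_size                  # mask side length in feature cells

    def forward(self, x):
        # With small receptive fields, a bounded pixel patch can only corrupt
        # a bounded window of cells in the extracted feature map.
        fmap = self.feature_extractor(x)            # shape (1, C, H, W)
        _, _, H, W = fmap.shape
        m = self.mask_size
        preds = []
        # Evaluate a prediction on every possible masked feature map.
        for i in range(H - m + 1):
            for j in range(W - m + 1):
                masked = fmap.clone()
                masked[:, :, i:i + m, j:j + m] = 0.0  # mask one candidate window
                pooled = masked.mean(dim=(2, 3))      # global average pooling
                preds.append(self.classifier(pooled).argmax(dim=1).item())
        # Detection rule: if all masked predictions agree, return that label;
        # any disagreement signals a possible adversarial patch.
        if len(set(preds)) == 1:
            return preds[0], False                    # (label, attack_detected)
        return None, True

# Illustrative usage with a toy small-receptive-field extractor.
extractor = nn.Sequential(nn.Conv2d(3, 16, kernel_size=3), nn.ReLU(),
                          nn.Conv2d(16, 32, kernel_size=3), nn.ReLU())
head = nn.Linear(32, 10)
detector = PatchGuardPPDetector(extractor, head, mask_size=2)
label, attacked = detector(torch.randn(1, 3, 32, 32))
```

In this sketch the mask is slid over every feature-map location so that at least one mask is guaranteed to cover the entire corrupted window, which is what makes the disagreement test a sound basis for provable detection.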