Certified Zeroth-Order Defense with a Robust UNet Denoiser in the Black-Box Setting

Certified defense methods against adversarial perturbations have recently been investigated in the black-box setting from a zeroth-order (ZO) perspective. However, these methods suffer from high model variance and low performance on high-dimensional datasets due to ineffective denoiser design, and they make limited use of ZO techniques. To this end, we propose a certified ZO preprocessing technique that removes adversarial perturbations from an attacked image in the black-box setting using only model queries. We propose a robust UNet denoiser (RDUNet) that ensures the robustness of black-box models trained on high-dimensional datasets. We propose a novel black-box denoised smoothing (DS) defense mechanism, ZO-RUDS, in which our RDUNet is prepended to the black-box model, ensuring black-box defense. We further propose ZO-AE-RUDS, in which RDUNet followed by an autoencoder (AE) is prepended to the black-box model. We perform extensive experiments on the classification datasets CIFAR-10, Tiny ImageNet, and STL-10, and on the MNIST dataset for image reconstruction tasks. Our proposed defenses ZO-RUDS and ZO-AE-RUDS outperform SOTA by large margins of $35\%$ and $9\%$ on the low-dimensional CIFAR-10 dataset, and by $20.61\%$ and $23.51\%$ on the high-dimensional STL-10 dataset, respectively.
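The two ingredients above can be illustrated with a minimal sketch: denoised smoothing (a denoiser prepended to a query-only classifier, with a majority vote over Gaussian noise) and a zeroth-order randomized gradient estimate built purely from forward queries. All functions here (`black_box_model`, `denoiser`) are toy stand-ins, not the paper's actual RDUNet or architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def black_box_model(x):
    # Toy stand-in for the query-only classifier: logits for 2 classes.
    # Predicts class 1 iff the mean pixel exceeds 0.5; only forward
    # queries are allowed, mirroring the black-box assumption.
    m = x.mean()
    return np.array([1.0 - m, m])

def denoiser(x):
    # Toy stand-in for the RDUNet denoiser: shrinkage toward the image mean.
    return 0.5 * x + 0.5 * x.mean()

def smoothed_predict(x, sigma=0.25, n=200):
    # Denoised smoothing: classify denoiser(x + Gaussian noise) many times
    # and return the majority vote, the quantity certified in DS defenses.
    votes = np.zeros(2, dtype=int)
    for _ in range(n):
        noisy = x + sigma * rng.normal(size=x.shape)
        logits = black_box_model(denoiser(noisy))
        votes[np.argmax(logits)] += 1
    return int(np.argmax(votes))

def zo_gradient(loss_fn, theta, mu=1e-3, q=20):
    # Randomized gradient estimate (RGE): finite differences along random
    # unit directions, using only function (query) evaluations. This is the
    # kind of ZO estimator used to train a denoiser through a black box.
    d = theta.size
    grad = np.zeros(d)
    for _ in range(q):
        u = rng.normal(size=d)
        u /= np.linalg.norm(u)
        grad += (loss_fn(theta + mu * u) - loss_fn(theta)) / mu * u
    return grad * d / q
```

In a real pipeline the ZO estimator would backpropagate a training loss through the black-box model to the denoiser's parameters; here it is shown on a generic loss to keep the sketch self-contained.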