Neural networks (NNs) are known to be vulnerable to adversarial perturbations, which has motivated a line of work on robustness certification for NNs, such as randomized smoothing: it samples smoothing noise from a certain distribution to certify the robustness of the resulting smoothed classifier. However, as prior work shows, the certified robust radius of randomized smoothing fails to scale to large datasets (the "curse of dimensionality"). To overcome this hurdle, we propose a Double Sampling Randomized Smoothing (DSRS) framework, which exploits the probability sampled under an additional smoothing distribution to tighten the robustness certification of the smoothed classifier. Theoretically, under mild assumptions, we prove that DSRS can certify a $\Theta(\sqrt{d})$ robust radius under the $\ell_2$ norm, where $d$ is the input dimension, implying that DSRS may be able to break the curse of dimensionality of randomized smoothing. We instantiate DSRS for a generalized family of Gaussian smoothing distributions and propose an efficient and sound computational method based on customized dual optimization that accounts for sampling error. Extensive experiments on MNIST, CIFAR-10, and ImageNet verify our theory and show that DSRS consistently certifies larger robust radii than existing baselines under different settings. Code is available at https://github.com/llylly/DSRS.
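For context, the single-distribution certification that DSRS tightens can be sketched as a Monte Carlo procedure: sample Gaussian noise around the input, take a Clopper-Pearson lower confidence bound on the top-class probability $\underline{p_A}$, and certify the standard $\ell_2$ radius $\sigma \Phi^{-1}(\underline{p_A})$. This is a minimal illustrative sketch, not the DSRS method itself; the function name `certify_l2_radius` and all parameter defaults are our own assumptions for illustration.

```python
import numpy as np
from scipy.stats import beta, norm


def certify_l2_radius(f, x, sigma=0.5, n=1000, alpha=0.001, seed=0):
    """Sketch of standard Gaussian randomized-smoothing certification.

    f     : maps a batch of inputs (n, *x.shape) to integer class labels
    sigma : std. dev. of the Gaussian smoothing noise
    n     : number of Monte Carlo noise samples
    alpha : failure probability of the certificate
    Returns (predicted class, certified l2 radius); radius 0 means abstain.
    """
    rng = np.random.default_rng(seed)
    noise = rng.normal(scale=sigma, size=(n,) + x.shape)
    preds = f(x[None, ...] + noise)           # labels under noisy inputs
    top = np.bincount(preds).argmax()         # empirical top class
    k = int((preds == top).sum())
    # Clopper-Pearson lower confidence bound on P(f(x + eps) = top)
    p_lower = alpha ** (1.0 / n) if k == n else beta.ppf(alpha, k, n - k + 1)
    if p_lower <= 0.5:
        return int(top), 0.0                  # abstain: cannot certify
    return int(top), float(sigma * norm.ppf(p_lower))
```

DSRS augments this certificate with the sampled probability under a second smoothing distribution, which is what enables the tighter $\Theta(\sqrt{d})$ radius discussed above.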