Randomized smoothing is a recent technique that achieves state-of-the-art performance in training certifiably robust deep neural networks. While the family of smoothing distributions is often connected to the choice of the norm used for certification, the parameters of these distributions are always set as global hyperparameters independent of the input data on which a network is certified. In this work, we revisit Gaussian randomized smoothing and show that the variance of the Gaussian distribution can be optimized at each input so as to maximize the certification radius when constructing the smooth classifier. We also propose a simple memory-based approach to certifying the resultant smooth classifier. This new approach is generic, parameter-free, and easy to implement. In fact, we show that our data-dependent framework can be seamlessly incorporated into three randomized smoothing approaches, leading to consistent improvements in certified accuracy. When this framework is used in the training routine of these approaches, followed by data-dependent certification, we achieve 9% and 6% improvements over the certified accuracy of the strongest baseline at a radius of 0.5 on CIFAR10 and ImageNet, respectively.
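To make the per-input idea concrete, below is a minimal illustrative sketch (ours, not the paper's code). It assumes a base classifier `f` that returns hard integer labels, estimates the standard Gaussian-smoothing certified radius R = σ·Φ⁻¹(p_A) (Cohen et al.) by Monte Carlo for each σ on a small grid, and keeps the σ that maximizes the radius at a given input. The grid search, sample sizes, and helper names (`certified_radius`, `best_sigma`) are assumptions for illustration; the paper optimizes the variance directly rather than over a grid.

```python
import numpy as np
from scipy.stats import norm

def certified_radius(f, x, sigma, n_samples=1000, num_classes=10, rng=None):
    """Monte Carlo estimate of the Gaussian-smoothing certified radius at x.

    `f` maps a batch of inputs to integer class labels (hard predictions).
    Illustrative sketch only: a sound certificate would use a confidence
    lower bound on p_A rather than the raw empirical estimate.
    """
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.normal(scale=sigma, size=(n_samples,) + x.shape)
    preds = f(x[None, ...] + noise)                 # labels for noisy copies of x
    counts = np.bincount(preds, minlength=num_classes)
    p_a = counts.max() / n_samples                  # empirical top-class probability
    p_a = min(p_a, 1.0 - 1e-9)                      # keep Phi^{-1} finite
    if p_a <= 0.5:                                  # abstain: no certificate
        return 0.0
    return sigma * norm.ppf(p_a)                    # R = sigma * Phi^{-1}(p_A)

def best_sigma(f, x, sigma_grid):
    """Pick the sigma on a grid that maximizes the estimated radius at input x."""
    radii = [certified_radius(f, x, s) for s in sigma_grid]
    i = int(np.argmax(radii))
    return sigma_grid[i], radii[i]

# Toy usage with a hypothetical stand-in classifier (sign of the pixel sum).
f = lambda batch: (batch.sum(axis=tuple(range(1, batch.ndim))) > 0).astype(int)
x = np.ones((3, 32, 32), dtype=np.float32)
sigma, radius = best_sigma(f, x, sigma_grid=[0.12, 0.25, 0.5, 1.0])
```

Note that selecting σ using the same samples that produce the certificate reuses the data for selection, which is precisely why the abstract pairs the per-input optimization with a memory-based certification procedure for the resulting smooth classifier.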