Randomized smoothing is currently the state-of-the-art method for providing certified robustness to deep neural networks. However, it often cannot achieve an adequately large certified region on real-world datasets. One way to obtain a larger certified region is to use an input-specific algorithm instead of a fixed Gaussian filter for all data points. Several methods based on this idea have been proposed, but they either suffer from high computational cost or yield only marginal improvements in certified radius. In this work, we show that by exploiting the quasiconvex structure of the problem, we can find the optimal certified radii for most data points with only slight computational overhead. This observation leads to an efficient and effective input-specific randomized smoothing algorithm. We conduct extensive experiments and empirical analysis on CIFAR-10 and ImageNet. The results show that the proposed method significantly enlarges the certified radii with low computational overhead.
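The key algorithmic idea, that a quasiconvex (here, unimodal) objective in the smoothing level σ can be optimized cheaply per input, can be illustrated with a small sketch. This is not the paper's algorithm: the radius function below is a toy surrogate in which the top-class probability p_A is *assumed* to decay exponentially with σ (the decay rate and all constants are hypothetical), folded into the standard Gaussian-smoothing radius R(σ) = σ·Φ⁻¹(p_A). Under that unimodality assumption, a ternary search finds the best σ in logarithmically many evaluations:

```python
import math
from statistics import NormalDist

_PHI_INV = NormalDist().inv_cdf  # inverse standard normal CDF


def certified_radius(sigma, pa0=0.95):
    """Toy surrogate for the certified radius as a function of sigma.

    Uses R = sigma * Phi^{-1}(p_A(sigma)) and models the decline of the
    top-class probability p_A under heavier noise with an exponential
    decay (an assumption made purely for illustration).
    """
    pa = pa0 * math.exp(-0.4 * sigma)  # hypothetical sigma-dependence
    if pa <= 0.5:
        return 0.0  # no certification once the top class loses majority
    return sigma * _PHI_INV(pa)


def best_sigma(lo=0.05, hi=3.0, iters=60):
    """Ternary search for the maximizer of a unimodal objective.

    Quasiconcavity lets us discard a third of the interval per step,
    so per-input optimization costs only O(log(1/eps)) evaluations.
    """
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if certified_radius(m1) < certified_radius(m2):
            lo = m1  # maximizer lies right of m1
        else:
            hi = m2  # maximizer lies left of m2
    return (lo + hi) / 2
```

In a real certified-defense setting each evaluation of the objective requires Monte Carlo sampling of the smoothed classifier, so keeping the number of evaluations small is exactly what makes an input-specific σ affordable.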