Randomized smoothing (RS) has been shown to be a fast, scalable technique for certifying the robustness of deep neural network classifiers. However, methods based on RS require augmenting data with large amounts of noise, which leads to significant drops in accuracy. We propose a training-free, modified smoothing approach, Smooth-Reduce, that leverages patching and aggregation to provide improved classifier certificates. Our algorithm classifies overlapping patches extracted from an input image, and aggregates the predicted logits to certify a larger radius around the input. We study two aggregation schemes -- max and mean -- and show that both approaches provide better certificates in terms of certified accuracy, average certified radii and abstention rates as compared to concurrent approaches. We also provide theoretical guarantees for such certificates, and empirically show significant improvements over other randomized smoothing methods that require expensive retraining. Further, we extend our approach to videos and provide meaningful certificates for video classifiers. A project page can be found at https://nyu-dice-lab.github.io/SmoothReduce/
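The patch-and-aggregate step described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: the function names (`extract_patches`, `smooth_reduce_logits`) are hypothetical, and in the full Smooth-Reduce pipeline the aggregated logits would be fed to a standard randomized-smoothing certification procedure rather than used directly.

```python
import numpy as np

def extract_patches(image, patch_size, stride):
    """Extract overlapping square patches from an H x W x C image."""
    H, W = image.shape[:2]
    patches = []
    for i in range(0, H - patch_size + 1, stride):
        for j in range(0, W - patch_size + 1, stride):
            patches.append(image[i:i + patch_size, j:j + patch_size])
    return np.stack(patches)

def smooth_reduce_logits(patch_logits, mode="mean"):
    """Aggregate per-patch logit vectors (num_patches x num_classes)
    into a single prediction vector, via the max or mean scheme."""
    if mode == "mean":
        return patch_logits.mean(axis=0)
    elif mode == "max":
        return patch_logits.max(axis=0)
    raise ValueError(f"unknown aggregation mode: {mode}")

# Sketch of usage: classify each patch, then reduce.
image = np.random.rand(32, 32, 3)
patches = extract_patches(image, patch_size=16, stride=8)
# `classifier` is a stand-in for any base network returning logits.
classifier = lambda batch: np.random.rand(batch.shape[0], 10)
prediction = smooth_reduce_logits(classifier(patches), mode="mean")
```

Because the approach only wraps an existing classifier's forward pass, it requires no retraining, which is the key practical advantage claimed in the abstract.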