Adversarial examples mainly exploit changes to input pixels to which humans are not sensitive, and arise from the fact that models make decisions based on uninterpretable features. Interestingly, cognitive science reports that human classification decisions rely predominantly on low spatial frequency components. In this paper, we investigate the robustness to adversarial perturbations of models constrained during training to leverage information corresponding to different spatial frequency ranges. We show that this robustness is tightly linked to the spatial frequency characteristics of the data at stake. Indeed, depending on the dataset, the same constraint may result in very different levels of robustness (up to a 0.41 difference in adversarial accuracy). To explain this phenomenon, we conduct several experiments that highlight influential factors such as the level of sensitivity to high frequencies, and the transferability of adversarial perturbations between original and low-pass filtered inputs.
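The low-pass filtered inputs mentioned above can be obtained by masking high spatial frequencies in the Fourier domain. The following is a minimal sketch of such a filter, assuming a NumPy-based pipeline and a simple circular mask; the `radius` parameter and the 32x32 example input are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (assumption, not the paper's exact pipeline): low-pass
# filtering an image by masking high spatial frequencies in the Fourier domain.
import numpy as np

def low_pass_filter(image: np.ndarray, radius: int) -> np.ndarray:
    """Keep only spatial frequencies within `radius` of the spectrum center."""
    h, w = image.shape
    spectrum = np.fft.fftshift(np.fft.fft2(image))       # centered 2D spectrum
    yy, xx = np.ogrid[:h, :w]
    mask = (yy - h // 2) ** 2 + (xx - w // 2) ** 2 <= radius ** 2
    filtered = np.fft.ifft2(np.fft.ifftshift(spectrum * mask))
    return np.real(filtered)

# Example: a 32x32 grayscale input (e.g. one channel of a small image)
# restricted to its lowest spatial frequencies.
img = np.random.rand(32, 32)
low_freq_img = low_pass_filter(img, radius=8)
```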