In this paper, we investigate the frequency sensitivity of Deep Neural Networks (DNNs) when presented with clean samples versus poisoned samples. Our analysis reveals significant disparities in frequency sensitivity between these two types of samples. Building on these findings, we propose FREAK, a simple yet effective frequency-based poisoned sample detection algorithm. Our experimental results demonstrate the efficacy of FREAK not only against frequency backdoor attacks but also against some spatial attacks. Our work is only a first step in leveraging these insights, and we believe that our analysis and proposed defense mechanism will provide a foundation for future research and development of backdoor defenses.
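The abstract does not spell out how frequency sensitivity is measured; as a purely illustrative sketch (not the FREAK implementation, and with the model, band edges, and sensitivity metric chosen here as assumptions), one way to probe a classifier's per-frequency-band sensitivity is to zero out a radial band of the input's 2-D Fourier spectrum and measure how much the logits move:

```python
import torch
import torch.nn as nn

def band_mask(h, w, r_lo, r_hi, device):
    """Boolean mask selecting centered-spectrum frequencies with radius in [r_lo, r_hi)."""
    ys = torch.arange(h, device=device) - h // 2
    xs = torch.arange(w, device=device) - w // 2
    r = torch.sqrt(ys[:, None].float() ** 2 + xs[None, :].float() ** 2)
    return (r >= r_lo) & (r < r_hi)

def band_sensitivity(model, x, r_lo, r_hi):
    """L2 change in logits when one frequency band of x is removed (illustrative metric)."""
    spec = torch.fft.fftshift(torch.fft.fft2(x), dim=(-2, -1))
    keep = (~band_mask(x.shape[-2], x.shape[-1], r_lo, r_hi, x.device)).float()
    x_filtered = torch.fft.ifft2(torch.fft.ifftshift(spec * keep, dim=(-2, -1))).real
    with torch.no_grad():
        return torch.norm(model(x) - model(x_filtered), dim=1)  # per-sample sensitivity

# Toy usage with a small CNN and random "images"; a real experiment would use a
# trained (possibly backdoored) classifier and actual clean vs. poisoned batches.
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10)).eval()
x = torch.rand(4, 3, 32, 32)
for lo, hi in [(0, 4), (4, 8), (8, 16), (16, 23)]:  # low -> high frequency bands
    print(f"band [{lo},{hi}):", band_sensitivity(model, x, lo, hi))
```

Comparing such per-band sensitivities between clean and poisoned inputs is one conceivable way the reported disparities could be exposed; the concrete procedure used by FREAK is described in the paper itself.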