Model robustness is vital for the reliable deployment of machine learning models in real-world applications. Recent studies have shown that data augmentation can cause models to over-rely on features in the low-frequency domain, sacrificing performance against low-frequency corruptions and highlighting a connection between frequency and robustness. Here, we take a step further and study the frequency bias of a model more directly, through the lens of its Jacobians, and its implications for model robustness. To achieve this, we propose Jacobian frequency regularization, which encourages a model's Jacobians to have a larger ratio of low-frequency components. Through experiments on four image datasets, we show that biasing classifiers towards low (high)-frequency components yields performance gains against high (low)-frequency corruptions and adversarial perturbations, albeit with a tradeoff in performance under low (high)-frequency corruptions. Our approach elucidates a more direct connection between the frequency bias and the robustness of deep learning models.
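As a concrete illustration of the kind of regularizer the abstract describes, below is a minimal PyTorch sketch. It uses the input gradient of the cross-entropy loss as a one-row proxy for the full Jacobian, measures the low-frequency share of its spectrum via a 2D FFT, and rewards a larger low-frequency ratio. The helper names (`low_freq_mask`, `jacobian_freq_loss`) and the hyperparameters `radius` and `lam` are hypothetical choices for the sketch, not values taken from the paper.

```python
# Minimal sketch of a Jacobian frequency regularizer, assuming a
# standard PyTorch image classifier. Hyperparameters are illustrative.
import torch
import torch.nn.functional as F


def low_freq_mask(h, w, radius, device):
    """Boolean mask selecting frequencies within `radius` of the DC
    component (spectrum centered via fftshift)."""
    fy = torch.fft.fftshift(torch.fft.fftfreq(h, device=device))
    fx = torch.fft.fftshift(torch.fft.fftfreq(w, device=device))
    yy, xx = torch.meshgrid(fy, fx, indexing="ij")
    return (yy**2 + xx**2).sqrt() <= radius


def jacobian_freq_loss(model, x, y, radius=0.1, lam=0.1):
    """Cross-entropy plus a term that encourages the input Jacobian to
    have a larger ratio of low-frequency components."""
    x = x.clone().requires_grad_(True)
    ce = F.cross_entropy(model(x), y)
    # Input gradient of the loss: a one-row proxy for the Jacobian,
    # with create_graph=True so the regularizer is differentiable.
    (jac,) = torch.autograd.grad(ce, x, create_graph=True)
    spec = torch.fft.fftshift(torch.fft.fft2(jac), dim=(-2, -1)).abs()
    mask = low_freq_mask(x.shape[-2], x.shape[-1], radius, x.device)
    low = (spec * mask).sum(dim=(-2, -1))
    total = spec.sum(dim=(-2, -1)) + 1e-8
    # Maximizing the low-frequency ratio == subtracting it from the loss.
    return ce - lam * (low / total).mean()
```

Biasing towards high-frequency components instead would amount to inverting the mask (or flipping the sign of the regularization term), matching the low/high tradeoff reported in the experiments.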