In supervised learning, it has been shown that, in many settings, label noise in the training data can be interpolated without penalty on test accuracy. We show that interpolating label noise induces adversarial vulnerability, and we prove the first theorem characterizing the dependence of adversarial risk on label noise in terms of the data distribution. Our results are almost sharp when the inductive bias of the learning algorithm is not taken into account. We also show that inductive bias makes the effect of label noise much stronger.
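The claimed mechanism can be illustrated with a toy 1-D sketch. This is our own illustration under simplified assumptions, not the paper's construction or proof: a 1-nearest-neighbor classifier interpolates its training labels, so each mislabeled training point carries a small region of wrong predictions around it, and clean test points within a perturbation radius of such a region become adversarially vulnerable. The measured adversarial risk then grows with the noise rate.

```python
import random

# Toy 1-D illustration (an assumption-laden sketch, not the paper's setting):
# 1-NN interpolates the training labels, so flipped labels create pockets of
# wrong predictions that nearby clean points can be perturbed into.

random.seed(0)

def make_data(n, noise_rate):
    # Class 0 lives on [0, 1), class 1 on [1, 2); each label is
    # flipped independently with probability noise_rate.
    xs = [random.uniform(0.0, 2.0) for _ in range(n)]
    ys = [(0 if x < 1.0 else 1) for x in xs]
    ys = [1 - y if random.random() < noise_rate else y for y in ys]
    return xs, ys

def nn_predict(xs, ys, x):
    # 1-NN interpolates: training error is zero even on flipped labels.
    i = min(range(len(xs)), key=lambda j: abs(xs[j] - x))
    return ys[i]

def adversarial_risk(xs, ys, test_xs, eps, grid=20):
    # Fraction of clean test points whose prediction can be pushed to the
    # wrong label by some perturbation of magnitude <= eps.
    def vulnerable(x):
        truth = 0 if x < 1.0 else 1
        return any(nn_predict(xs, ys, x + eps * k / grid) != truth
                   for k in range(-grid, grid + 1))
    return sum(vulnerable(x) for x in test_xs) / len(test_xs)

# Test points stay 0.1 away from the true boundary at x = 1, so with
# eps = 0.01 any vulnerability must come from interpolated label noise.
test_xs = [0.1 + 0.8 * i / 99 for i in range(100)] + \
          [1.1 + 0.8 * i / 99 for i in range(100)]

risks = {}
for noise in (0.0, 0.05, 0.2):
    xs, ys = make_data(300, noise)
    risks[noise] = adversarial_risk(xs, ys, test_xs, eps=0.01)
    print(f"noise rate {noise:.2f} -> adversarial risk {risks[noise]:.2f}")
```

With zero label noise the interpolating 1-NN rule has essentially no adversarial risk away from the boundary, while increasing the noise rate increases the measured risk, matching the qualitative dependence the abstract describes.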