Delusive poisoning is a special kind of attack that obstructs learning: test performance can be significantly degraded by manipulating only (even slightly) the features of correctly labeled training examples. By formalizing this malicious attack as finding the worst-case distribution shift at training time within a specific $\infty$-Wasserstein ball, we show that minimizing adversarial risk on the poisoned data is equivalent to optimizing an upper bound of the natural risk on the original data. This implies that adversarial training is a principled defense against delusive poisoning. To further understand the internal mechanism of the defense, we show that adversarial training resists the training distribution shift by preventing the learner from overly relying on non-robust features in a natural setting. Finally, we complement our theoretical findings with a set of experiments on popular benchmark datasets, showing that the defense withstands six different practical attacks. Both theoretical and empirical results vote for adversarial training when confronted with delusive poisoning.
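As a rough sketch of the bound referred to above (the notation $f$, $\ell$, $\epsilon$ is illustrative rather than taken from the paper, and we assume $W_\infty$ is defined with the same norm that constrains the adversarial perturbation): let $\mathcal{D}$ be the original data distribution and $\widehat{\mathcal{D}}$ a delusively poisoned distribution lying in the $\infty$-Wasserstein ball of radius $\epsilon$ around $\mathcal{D}$. Coupling each poisoned example $\hat{x}$ with a clean example $x$ at distance at most $\epsilon$ gives
\[
W_\infty\!\bigl(\widehat{\mathcal{D}},\,\mathcal{D}\bigr) \le \epsilon
\;\;\Longrightarrow\;\;
\underbrace{\mathbb{E}_{(x,y)\sim\mathcal{D}}\bigl[\ell\bigl(f(x),y\bigr)\bigr]}_{\text{natural risk on the original data}}
\;\le\;
\underbrace{\mathbb{E}_{(\hat{x},y)\sim\widehat{\mathcal{D}}}\Bigl[\max_{\|\delta\|\le\epsilon}\ell\bigl(f(\hat{x}+\delta),y\bigr)\Bigr]}_{\text{adversarial risk on the poisoned data}}.
\]
Adversarial training on the poisoned data minimizes the right-hand side, and therefore controls the natural risk on the clean distribution on the left, which is the sense in which it serves as a principled defense here.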