Machine learning is vulnerable to a wide variety of attacks. It is now well understood that by changing the underlying data distribution, an adversary can poison the model trained with it or introduce backdoors. In this paper we present a novel class of training-time attacks that require no changes to the underlying dataset or model architecture, but instead change only the order in which data are supplied to the model. In particular, we find that the attacker can either prevent the model from learning, or poison it to learn behaviours specified by the attacker. Furthermore, we find that even a single adversarially-ordered epoch can be enough to slow down model learning, or even to reset all of the learning progress. Indeed, the attacks presented here are not specific to the model or dataset, but rather target the stochastic nature of modern learning procedures. We extensively evaluate our attacks on computer vision and natural language benchmarks, finding that the adversary can disrupt model training and even introduce backdoors.
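To make the threat model concrete, the sketch below illustrates one possible data-ordering attack under stated assumptions: the adversary controls only the batch order seen by the victim's training loop, leaving the dataset, model, and loss untouched. The reordering policy shown here (sorting examples by their current per-example loss) and the helper name `adversarially_ordered_loader` are illustrative choices, not the specific procedure evaluated in the paper.

```python
# Minimal sketch of a data-ordering attack (illustrative; assumes a loss-sorted
# reordering policy, which is only one of many possible attacker policies).
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset


def adversarially_ordered_loader(model, dataset, batch_size=32, ascending=True):
    """Return a DataLoader whose batches follow an attacker-chosen, loss-sorted order.

    ascending=True front-loads low-loss examples; flipping the flag (or grouping
    similar-loss examples into the same batch) biases the sequence of gradients
    without modifying any data point.
    """
    per_example = nn.CrossEntropyLoss(reduction="none")
    xs, ys = dataset.tensors
    with torch.no_grad():
        losses = per_example(model(xs), ys)
    order = torch.argsort(losses, descending=not ascending)
    reordered = TensorDataset(xs[order], ys[order])
    # shuffle=False is the point: the victim trains on the attacker's order.
    return DataLoader(reordered, batch_size=batch_size, shuffle=False)


if __name__ == "__main__":
    torch.manual_seed(0)
    xs = torch.randn(512, 20)
    ys = (xs.sum(dim=1) > 0).long()
    data = TensorDataset(xs, ys)

    model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = nn.CrossEntropyLoss()

    for epoch in range(3):
        loader = adversarially_ordered_loader(model, data)  # attacker-supplied order
        for xb, yb in loader:
            opt.zero_grad()
            loss_fn(model(xb), yb).backward()
            opt.step()
```

Because SGD's convergence guarantees rely on batches being (approximately) unbiased samples of the data distribution, even this simple reordering breaks the i.i.d. assumption that shuffled training normally provides, which is the stochasticity the attacks target.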