Machine learning is vulnerable to a wide variety of attacks. It is now well understood that by changing the underlying data distribution, an adversary can poison the model trained with it or introduce backdoors. In this paper we present a novel class of training-time attacks that require no changes to the underlying dataset or model architecture, but instead only change the order in which data are supplied to the model. In particular, an attacker can disrupt the integrity and availability of a model by simply reordering training batches, with no knowledge of either the model or the dataset. Indeed, the attacks presented here are not specific to any model or dataset, but rather target the stochastic nature of modern learning procedures. We extensively evaluate our attacks and find that the adversary can disrupt model training and even introduce backdoors. For integrity, we find that the attacker can either stop the model from learning or poison it to learn behaviours specified by the attacker. For availability, we find that a single adversarially-ordered epoch can be enough to slow down model learning, or even to reset all of the learning progress. Such attacks have a long-term impact: they decrease model performance hundreds of epochs after the attack takes place. Reordering is a very powerful adversarial paradigm in that it removes the assumption that an adversary must inject adversarial data points or perturbations to perform training-time attacks. It reminds us that stochastic gradient descent relies on the assumption that data are sampled at random. If this randomness is compromised, then all bets are off.
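As a rough illustration of the threat model (a minimal sketch, not the paper's exact procedure), the snippet below shows how an adversary who controls only batch order might replace the benign random shuffle with an adversarial permutation, here ordering batches by ascending loss under an attacker-side surrogate model. The toy data, the surrogate, and the loss-based ordering rule are all illustrative assumptions; the key point is that no data point is added or modified.

```python
# Hypothetical sketch of a data-ordering attack: the adversary reorders
# training batches but never touches their contents.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy data: 64 batches of 32 two-feature points with binary labels.
batches = [(torch.randn(32, 2), torch.randint(0, 2, (32,)))
           for _ in range(64)]

model = nn.Linear(2, 2)        # victim model (stand-in)
surrogate = nn.Linear(2, 2)    # attacker's surrogate (an assumption)
loss_fn = nn.CrossEntropyLoss()

def batch_loss(m, batch):
    """Surrogate loss used only to rank batches, no gradients taken."""
    x, y = batch
    with torch.no_grad():
        return loss_fn(m(x), y).item()

# Benign SGD would draw batches in a random order; the attack substitutes
# a deterministic, loss-ordered permutation of the same batches.
adversarial_order = sorted(batches, key=lambda b: batch_loss(surrogate, b))

opt = torch.optim.SGD(model.parameters(), lr=0.1)
for x, y in adversarial_order:  # one adversarially-ordered epoch
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    opt.step()
```

The sketch makes the assumption behind SGD explicit: the convergence guarantees rest on batches being an unbiased random sample, so a permutation correlated with loss biases the sequence of gradient updates even though every individual example is unchanged.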