Machine learning techniques have been widely applied across diverse domains. However, they are potentially vulnerable to data poisoning attacks, in which sophisticated attackers disrupt the learning procedure by injecting a fraction of malicious samples into the training dataset. Existing defense techniques against poisoning attacks are largely attack-specific: they are designed for one specific type of attack but do not work for other types, mainly because of the distinct principles the attacks follow; few general defense strategies have been developed. In this paper, we propose De-Pois, an attack-agnostic defense against poisoning attacks. The key idea of De-Pois is to train a mimic model whose purpose is to imitate the behavior of the target model trained on clean samples. We take advantage of Generative Adversarial Networks (GANs) to facilitate informative training-data augmentation as well as mimic-model construction. By comparing the prediction differences between the mimic model and the target model, De-Pois is able to distinguish poisoned samples from clean ones, without explicit knowledge of any ML algorithm or type of poisoning attack. We implement four types of poisoning attacks and evaluate De-Pois against five typical defense methods on several realistic datasets. The results demonstrate that De-Pois is effective and efficient at detecting poisoned data against all four types of poisoning attacks, with both accuracy and F1-score above 0.9 on average.
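The detection step described above can be sketched as follows. This is a minimal illustration only, not the paper's implementation: the function name `detect_poisoned`, the scalar "prediction difference" scores, and the threshold value are all assumptions made for the sketch; in De-Pois the scores would come from comparing the GAN-trained mimic model's outputs against each training sample.

```python
# Hypothetical sketch of De-Pois-style detection: a sample is flagged as
# poisoned when its prediction difference against the mimic model exceeds
# a threshold. All names and values here are illustrative assumptions.

def detect_poisoned(prediction_diffs, threshold):
    """Flag each sample whose mimic-model prediction difference exceeds threshold."""
    return [diff > threshold for diff in prediction_diffs]

# Clean samples agree closely with the mimic model (small difference);
# poisoned samples deviate strongly (large difference).
diffs = [0.05, 0.12, 0.88, 0.09, 0.93]
flags = detect_poisoned(diffs, threshold=0.5)
# flags -> [False, False, True, False, True]
```

The threshold here is a free parameter of the sketch; choosing it in practice (e.g. from a validation set of known-clean data) is where a real deployment would need care.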