Machine-learning-based, data-driven technologies have shown impressive performance in a variety of application domains. Most enterprises use data from multiple sources to provide high-quality applications. The reliability of these external data sources raises concerns about the security of the adopted machine learning techniques. An attacker can tamper with the training or test datasets to subvert the predictions of the models generated by these techniques. Data poisoning is one such attack, wherein the attacker tries to degrade the performance of a classifier by manipulating the training data. In this work, we focus on label contamination attacks, in which an attacker poisons the labels of data to compromise the functionality of the system. We develop Gradient-based Data Subversion strategies to achieve model degradation under the assumption that the attacker has limited knowledge of the victim model. We exploit the gradients of a differentiable convex loss function (residual errors) with respect to the predicted label as a warm start and formulate different strategies to find a set of data instances to contaminate. Further, we analyze the transferability of the attacks and the susceptibility of binary classifiers. Our experiments show that the proposed approach outperforms the baselines and is computationally efficient.
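The sketch below illustrates the general idea of a gradient-guided label-flipping attack as described above; it is not the paper's exact algorithm. It assumes a squared-error loss, so the gradient with respect to the predicted label reduces to the residual (prediction minus label), which is used as a warm start to rank candidate instances. The surrogate model, the function name `gradient_guided_label_flips`, the `budget` parameter, and the selection heuristic (flipping the most confidently fit instances) are all illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression


def gradient_guided_label_flips(X_train, y_train, budget):
    """Flip the labels of `budget` training instances chosen via a
    residual-based gradient ranking. Labels are assumed to be in {0, 1}."""
    # Surrogate classifier standing in for the limited-knowledge victim model.
    surrogate = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    y_pred = surrogate.predict_proba(X_train)[:, 1]

    # Under a squared loss, the gradient w.r.t. the (soft) label is
    # proportional to the residual y_pred - y.
    residuals = y_pred - y_train

    # One plausible strategy: contaminate the instances the surrogate fits
    # most confidently (smallest |residual|), whose flipped labels are most
    # likely to distort the decision boundary when the victim retrains.
    target_idx = np.argsort(np.abs(residuals))[:budget]

    y_poisoned = y_train.copy()
    y_poisoned[target_idx] = 1 - y_poisoned[target_idx]
    return y_poisoned, target_idx


if __name__ == "__main__":
    # Toy binary dataset to exercise the sketch.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
    y_poisoned, flipped = gradient_guided_label_flips(X, y, budget=20)
    print(f"Flipped {len(flipped)} of {len(y)} labels")
```

Other selection strategies (for example, targeting the largest gradient magnitudes or spreading flips across classes) fit the same template by changing only the ranking step.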