Indiscriminate data poisoning attacks aim to decrease a model's test accuracy by injecting a small amount of corrupted training data. Despite significant interest, existing attacks remain relatively ineffective against modern machine learning (ML) architectures. In this work, we introduce the notion of model poisonability as a technical tool to explore the intrinsic limits of data poisoning attacks. We derive an easily computable threshold that establishes and quantifies a surprising phase transition phenomenon among popular ML models: data poisoning attacks become effective only when the poisoning ratio exceeds our threshold. Building on existing parameter corruption attacks and refining the Gradient Canceling attack, we perform extensive experiments to confirm our theoretical findings, test the predictability of our transition threshold, and significantly improve existing data poisoning baselines across a range of datasets and models. Our work highlights the critical role played by the poisoning ratio and offers new insights into existing empirical results, attacks, and mitigation strategies in data poisoning.