In a \emph{data poisoning attack}, an attacker modifies, deletes, and/or inserts some training examples to corrupt the learned machine learning model. \emph{Bootstrap Aggregating (bagging)} is a well-known ensemble learning method: it trains multiple base models on random subsamples of a training dataset using a base learning algorithm and takes a majority vote among their predictions to label a testing example. We prove the intrinsic certified robustness of bagging against data poisoning attacks. Specifically, we show that bagging with an arbitrary base learning algorithm provably predicts the same label for a testing example when the number of modified, deleted, and/or inserted training examples is bounded by a threshold. Moreover, we show that our derived threshold is tight when no assumptions are made on the base learning algorithm. We evaluate our method on MNIST and CIFAR10. For instance, our method achieves a certified accuracy of $91.1\%$ on MNIST when an attacker arbitrarily modifies, deletes, and/or inserts 100 training examples.
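To make the bagging procedure concrete, below is a minimal sketch of training-with-subsampling and majority-vote prediction, assuming a `base_learner` callable that trains on a list of `(x, y)` pairs and returns a classifier; all names here are hypothetical illustrations, not the paper's implementation.

```python
import numpy as np
from collections import Counter

def bagging_predict(train_set, x_test, k, n_models, base_learner, rng=None):
    """Sketch of bagging: train n_models base models, each on a random
    subsample of k training examples drawn with replacement, then predict
    the label of x_test by majority vote among the base models."""
    rng = rng or np.random.default_rng(0)
    votes = Counter()
    for _ in range(n_models):
        # Bootstrap subsample: k indices drawn uniformly with replacement.
        idx = rng.integers(0, len(train_set), size=k)
        subsample = [train_set[i] for i in idx]
        model = base_learner(subsample)  # arbitrary base learning algorithm
        votes[model(x_test)] += 1
    # The ensemble's prediction is the most frequently voted label.
    return votes.most_common(1)[0][0]

# Toy base learner for illustration: memorizes the subsample's majority class.
def majority_class_learner(samples):
    top = Counter(y for _, y in samples).most_common(1)[0][0]
    return lambda x: top
```

Intuitively, this structure is what the certified-robustness result exploits: each base model sees only a small random subsample, so poisoning a bounded number of training examples can change only a limited fraction of the votes.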