Deep neural networks (DNNs) are vulnerable to "backdoor" poisoning attacks, in which an adversary implants a secret trigger into an otherwise normally functioning model. Detection of backdoors in trained models without access to the training data or example triggers is an important open problem. In this paper, we identify an interesting property of these models: adversarial perturbations transfer from image to image more readily in poisoned models than in clean models. This holds for a variety of model and trigger types, including triggers that are not linearly separable from clean data. We use this feature to detect poisoned models in the TrojAI benchmark, as well as additional models.
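To make the detection signal concrete, the following is a minimal sketch of how cross-image transferability of adversarial perturbations could be measured for a candidate model. The attack choice (FGSM), the epsilon value, and the helper names are illustrative assumptions, not the paper's exact procedure; the point is only that perturbations crafted on one image are applied to other images and the resulting flip rate is compared against clean-model baselines.

```python
import torch
import torch.nn.functional as F

def fgsm_perturbation(model, x, y, eps=0.03):
    """Craft an untargeted FGSM perturbation on a single image x with label y."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x.unsqueeze(0)), y.unsqueeze(0))
    loss.backward()
    return eps * x.grad.sign()

def transfer_rate(model, images, labels, eps=0.03):
    """Fraction of *other* images whose prediction flips when perturbed with a
    perturbation that was crafted on a different image."""
    flips, total = 0, 0
    for i in range(len(images)):
        delta = fgsm_perturbation(model, images[i], labels[i], eps)
        for j in range(len(images)):
            if j == i:
                continue
            with torch.no_grad():
                pred = model((images[j] + delta).clamp(0, 1).unsqueeze(0)).argmax(1)
            flips += int(pred.item() != labels[j].item())
            total += 1
    return flips / total
```

Under the paper's observation, a poisoned model would tend to show a noticeably higher `transfer_rate` than clean models of the same architecture, so a simple threshold chosen on held-out clean models could serve as the detector.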