As data becomes increasingly vital, a company may be very cautious about releasing its data, because competitors could use it to train high-performance models, posing a serious threat to the company's commercial competitiveness. To prevent good models from being trained on the data, we can add imperceptible perturbations to it. Since such perturbations aim to disrupt the entire training process, they should reflect the vulnerability of DNN training rather than that of a single model. Based on this new idea, we seek perturbed examples that are always unrecognized (never correctly classified) during training. In this paper, we uncover them via the gradients of model checkpoints, forming the proposed self-ensemble protection (SEP), which is highly effective because (1) learning on examples that are ignored during normal training tends to yield DNNs that ignore normal examples; (2) the cross-model gradients of checkpoints are nearly orthogonal, meaning they are as diverse as DNNs with different architectures. Thus, the strong performance of our ensemble requires only the computation of training a single model. Extensive experiments with 9 baselines on 3 datasets and 5 architectures verify SEP as a new state-of-the-art; e.g., our small $\ell_\infty=2/255$ perturbations reduce the accuracy of a CIFAR-10 ResNet18 from 94.56% to 14.68%, compared to 41.35% by the best-known method. Code is available at https://github.com/Sizhe-Chen/SEP.
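To make the self-ensemble idea concrete, below is a minimal, hypothetical sketch of crafting a bounded perturbation whose gradient signal is averaged over several training checkpoints of a single model, rather than taken from one final model. The function name `make_protective_perturbation`, the step sizes, and the use of a simple signed-gradient update are illustrative assumptions, not the authors' released implementation (see the repository linked above for that).

```python
# Minimal sketch (assumptions, not the official SEP code): perturb x within an
# l_inf ball so that it is misclassified by every intermediate checkpoint,
# using the loss averaged over the checkpoint "self-ensemble".
import torch
import torch.nn.functional as F

def make_protective_perturbation(x, y, checkpoints, eps=2/255, alpha=0.5/255, steps=10):
    """Return x + delta with ||delta||_inf <= eps, guided by checkpoint gradients."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        # Average the loss over checkpoints saved along one training run.
        loss = sum(F.cross_entropy(m(x + delta), y) for m in checkpoints) / len(checkpoints)
        loss.backward()
        with torch.no_grad():
            # Ascend the averaged loss so the example stays unrecognized
            # (never correctly classified) by the checkpoints; the update
            # direction and sign here are an assumption of this sketch.
            delta += alpha * delta.grad.sign()
            delta.clamp_(-eps, eps)
        delta.grad.zero_()
    return (x + delta).detach()
```

In this sketch, `checkpoints` would be a list of frozen (eval-mode) snapshots of the same network taken during a single training run, so the ensemble costs no more than training one model.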