Unlearnable examples (ULEs) aim to protect data from unauthorized use in training DNNs. Existing work adds $\ell_\infty$-bounded perturbations to the original samples so that the trained model generalizes poorly. Such perturbations, however, are easy to eliminate with adversarial training and data augmentations. In this paper, we address this problem from a novel perspective by perturbing only one pixel in each image. Interestingly, such a small modification can effectively degrade model accuracy to nearly that of an untrained counterpart. Moreover, our \emph{One-Pixel Shortcut (OPS)} cannot be erased by adversarial training or strong augmentations. To generate OPS, we perturb all in-class images at the same position toward the same target value, chosen so that it deviates most strongly and stably from all the original images. Since this generation is based only on the images themselves, OPS requires significantly less computation than previous methods that rely on DNN generators. Based on OPS, we introduce an unlearnable dataset called CIFAR-10-S, which is indistinguishable from CIFAR-10 to humans but drives trained models to extremely low accuracy. Even under adversarial training, a ResNet-18 trained on CIFAR-10-S reaches only 10.61% accuracy, compared to 83.02% for the existing error-minimizing method.
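The per-class search described above can be sketched as follows. This is a minimal, hypothetical illustration (single-channel images, a simplified criterion that maximizes only the mean deviation of the target value from the class's pixels), not the paper's exact objective; the function name and candidate set are assumptions for the example.

```python
import numpy as np

def one_pixel_shortcut(images, candidates=(0.0, 1.0)):
    """Simplified OPS sketch: pick the (position, value) whose target value
    deviates most, on average, from this class's images at that pixel,
    then stamp that value onto the same pixel of every in-class image.
    `images`: float array of shape (N, H, W) with values in [0, 1].
    """
    best = None
    for v in candidates:
        # Mean |v - pixel| over the class, giving one score per (H, W) position.
        dev = np.abs(v - images).mean(axis=0)
        i, j = np.unravel_index(dev.argmax(), dev.shape)
        if best is None or dev[i, j] > best[0]:
            best = (dev[i, j], i, j, v)
    _, i, j, v = best
    out = images.copy()
    out[:, i, j] = v  # same pixel, same value for every in-class image
    return out, (i, j, v)
```

Because the search only scans pixel statistics, it runs in a single pass over the data, with no model training involved.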