Adversarial training (AT) with imperfect supervision is significant but has received limited attention. To push AT towards more practical scenarios, we explore a brand new yet challenging setting, i.e., AT with complementary labels (CLs), which specify a class that a data sample does not belong to. However, directly combining AT with existing methods for CLs results in consistent failure, whereas a simple baseline of two-stage training does not fail. In this paper, we further explore this phenomenon and identify the underlying challenges of AT with CLs as intractable adversarial optimization and low-quality adversarial examples. To address these problems, we propose a new learning strategy using gradually informative attacks, which consists of two critical components: 1) Warm-up Attack (Warm-up) gently raises the adversarial perturbation budgets to ease the adversarial optimization with CLs; 2) Pseudo-Label Attack (PLA) incorporates the progressively informative model predictions into a corrected complementary loss. Extensive experiments demonstrate the effectiveness of our method on a range of benchmark datasets. The code is publicly available at: https://github.com/RoyalSkye/ATCL.
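Since the abstract only sketches the two components, a minimal PyTorch illustration may help fix intuition. Everything below is an assumption made for exposition, not the authors' implementation (see the linked repository for that): the function names (`warmup_epsilon`, `pgd_attack`, `atcl_step`), the linear warm-up schedule, and the specific "negative learning" form of the complementary loss are all hypothetical.

```python
# Illustrative sketch of AT with complementary labels (ATCL).
# NOT the authors' code: schedules, loss form, and names are assumptions.
import torch
import torch.nn.functional as F

def warmup_epsilon(epoch, warmup_epochs, eps_max):
    """Warm-up Attack: gently raise the L-inf budget from 0 toward eps_max
    (a linear schedule is assumed here)."""
    return eps_max * min(1.0, epoch / warmup_epochs)

def pgd_attack(model, x, y_pseudo, eps, alpha, steps):
    """Standard PGD under budget eps, using pseudo-labels as attack targets."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y_pseudo)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        # Project back into the eps-ball around x and the valid pixel range.
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def atcl_step(model, x, y_complementary, epoch, cfg):
    """One training step combining the two components (illustrative)."""
    eps = warmup_epsilon(epoch, cfg["warmup_epochs"], cfg["eps_max"])
    # Pseudo-Label Attack: since true labels are unknown under CLs, use the
    # model's progressively informative predictions as attack targets.
    with torch.no_grad():
        y_pseudo = model(x).argmax(dim=1)
    x_adv = pgd_attack(model, x, y_pseudo, eps, cfg["alpha"], cfg["pgd_steps"])
    # A complementary loss that pushes probability mass away from the class
    # the sample is known NOT to belong to (one common corrected form).
    probs = F.softmax(model(x_adv), dim=1)
    p_comp = probs.gather(1, y_complementary.unsqueeze(1))
    return -torch.log(1.0 - p_comp + 1e-8).mean()
```

As a usage note, `atcl_step` would be called once per mini-batch inside a normal training loop, with `cfg` holding the attack hyperparameters (e.g. `eps_max`, `alpha`, `pgd_steps`, `warmup_epochs`); the returned loss is backpropagated through the model as usual.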