Existing adversarial learning methods for enhancing the robustness of deep neural networks assume the availability of a large amount of data from which to generate adversarial examples. However, in an adversarial meta-learning setting, the model must train with only a few adversarial examples to learn a robust model for unseen tasks, which is a very difficult goal to achieve. Further, learning transferable robust representations for unseen domains is a difficult problem even with a large amount of data. To tackle this challenge, we propose a novel adversarial self-supervised meta-learning framework with bilevel attacks that aims to learn robust representations generalizing across tasks and domains. Specifically, in the inner loop, we update the parameters of the given encoder by taking inner gradient steps using two different sets of augmented samples, and generate adversarial examples for each view by maximizing the instance classification loss. Then, in the outer loop, we meta-learn the encoder parameters to maximize the agreement between the two adversarial examples, which enables the encoder to learn robust representations. We experimentally validate the effectiveness of our approach on unseen domain adaptation tasks, on which it achieves impressive performance. Specifically, our method significantly outperforms the state-of-the-art meta-adversarial learning methods on few-shot learning tasks, as well as self-supervised learning baselines in standard learning settings with large-scale datasets.
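The inner-loop adaptation, bilevel attack, and outer-loop meta-update described above can be sketched on a toy problem. Everything in this sketch is an illustrative assumption, not the paper's implementation: a linear "encoder", a negative-cosine agreement loss standing in for the instance-classification (contrastive) loss, finite-difference gradients to stay dependency-free, and single FGSM-style attack steps in place of multi-step optimization.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, eps, lr = 8, 4, 0.05, 0.5   # input dim, embedding dim, attack budget, step size

def agreement_loss(W, v1, v2):
    # Negative cosine similarity between the embeddings of the two views;
    # a stand-in for the instance-classification loss in the abstract.
    z1, z2 = W @ v1, W @ v2
    return -float(z1 @ z2 / (np.linalg.norm(z1) * np.linalg.norm(z2) + 1e-8))

def num_grad(f, x, h=1e-5):
    # Central finite differences; a real implementation would use autograd.
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e.flat[i] = h
        g.flat[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

# One instance and two augmented views (additive noise as a toy augmentation).
x = rng.normal(size=d)
v1 = x + 0.05 * rng.normal(size=d)
v2 = x + 0.05 * rng.normal(size=d)
W = rng.normal(size=(k, d))        # linear "encoder"
w = W.reshape(-1)

# Inner loop: one adaptation step of the encoder on the clean views.
w_inner = w - lr * num_grad(lambda p: agreement_loss(p.reshape(k, d), v1, v2), w)
W_inner = w_inner.reshape(k, d)

# Bilevel attack: perturb each view to *maximize* the loss under the adapted
# encoder (one signed gradient step, so the perturbation stays in the eps ball).
adv1 = v1 + eps * np.sign(num_grad(lambda v: agreement_loss(W_inner, v, v2), v1))
adv2 = v2 + eps * np.sign(num_grad(lambda v: agreement_loss(W_inner, adv1, v), v2))

# Outer loop: meta-update the *original* encoder parameters to maximize
# agreement (minimize the loss) between the two adversarial views.
w_meta = w - lr * num_grad(lambda p: agreement_loss(p.reshape(k, d), adv1, adv2), w)
W_meta = w_meta.reshape(k, d)
```

Note the bilevel structure: the attack is computed against the task-adapted parameters `W_inner`, while the meta-update is applied to the original parameters `w`, mirroring the MAML-style inner/outer split the abstract describes.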