Meta-learning enables a model to learn a new task from very limited data. In this paper, we study general meta-learning with adversarial samples. We present a meta-learning algorithm, ADML (ADversarial Meta-Learner), which leverages both clean and adversarial samples to optimize the initialization of a learning model in an adversarial manner. ADML has the following desirable properties: 1) it turns out to be very effective even in cases with only clean samples; 2) it is model-agnostic, i.e., it is compatible with any learning model that can be trained with gradient descent; and, most importantly, 3) it is robust to adversarial samples, i.e., unlike other meta-learning methods, it suffers only a minor performance degradation in the presence of adversarial samples. We show via extensive experiments that ADML delivers state-of-the-art performance on two widely used image datasets, MiniImageNet and CIFAR100, in terms of both accuracy and robustness.
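The abstract describes a MAML-style scheme in which the meta-initialization is optimized adversarially using both clean and adversarial samples. The following is a minimal, hypothetical sketch of that idea for a linear model with a first-order meta-gradient and FGSM-style perturbations; the function names, loss, learning rates, and the specific clean/adversarial pairing are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def loss(w, X, y):
    # Mean squared error of a linear model (stand-in for the task loss).
    return np.mean((X @ w - y) ** 2)

def grad_w(w, X, y):
    # Gradient of the loss w.r.t. the model parameters.
    return 2 * X.T @ (X @ w - y) / len(y)

def grad_x(w, X, y):
    # Gradient of the loss w.r.t. the inputs, used to craft perturbations.
    return 2 * (X @ w - y)[:, None] * w[None, :] / len(y)

def fgsm(w, X, y, eps=0.1):
    # FGSM-style adversarial samples: step in the sign of the input gradient.
    return X + eps * np.sign(grad_x(w, X, y))

def adversarial_meta_step(w, tasks, inner_lr=0.05, outer_lr=0.01, eps=0.1):
    """One hypothetical outer update (illustrative, not ADML itself):
    adapt on adversarial support samples and meta-optimize on clean query
    samples, and vice versa, so the initialization sees both regimes."""
    meta_grad = np.zeros_like(w)
    for Xs, ys, Xq, yq in tasks:
        # Adversarial support -> clean query.
        Xs_adv = fgsm(w, Xs, ys, eps)
        w_adapted = w - inner_lr * grad_w(w, Xs_adv, ys)
        meta_grad += grad_w(w_adapted, Xq, yq)  # first-order approximation
        # Clean support -> adversarial query.
        w_adapted = w - inner_lr * grad_w(w, Xs, ys)
        Xq_adv = fgsm(w_adapted, Xq, yq, eps)
        meta_grad += grad_w(w_adapted, Xq_adv, yq)
    return w - outer_lr * meta_grad / (2 * len(tasks))
```

Because the update is model-agnostic, the same structure applies to any differentiable model: only `grad_w` and `grad_x` change.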