Robust Model-Agnostic Meta-Learning (MAML) aims to train a meta-model that can quickly adapt to novel classes with only a few exemplars while remaining robust to adversarial attacks. The conventional solution is to introduce a robustness-promoting regularization during the meta-training stage. With such regularization, previous robust MAML methods simply follow the typical MAML practice of matching the number of training shots to the number of test shots to achieve optimal adaptation performance. However, although robustness is largely improved, these methods sacrifice substantial clean accuracy. In this paper, we observe that introducing robustness-promoting regularization into MAML reduces the intrinsic dimension of clean-sample features, which lowers the capacity of the clean representations. This may explain why the clean accuracy of previous robust MAML methods drops so severely. Based on this observation, we propose a simple strategy, i.e., increasing the number of training shots, to mitigate the loss of intrinsic dimension caused by robustness-promoting regularization. Though simple, our method remarkably improves the clean accuracy of MAML without much loss of robustness, producing a robust yet accurate model. Extensive experiments demonstrate that our method outperforms prior art in achieving a better trade-off between accuracy and robustness. In addition, we observe that our method is less sensitive to the number of fine-tuning steps during meta-training, which allows the number of fine-tuning steps to be reduced to improve training efficiency.
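The intrinsic dimension referred to above can be measured with standard nearest-neighbor estimators. As an illustration only (the abstract does not specify the paper's measurement pipeline), the following is a minimal sketch of the TwoNN maximum-likelihood estimator applied to synthetic features whose true intrinsic dimension is known:

```python
import numpy as np

def twonn_intrinsic_dim(X):
    """Estimate intrinsic dimension with the TwoNN MLE estimator:
    for each point, take the ratio mu = r2 / r1 of its second- to
    first-nearest-neighbor distance; under a locally uniform density,
    mu follows a Pareto law whose shape parameter is the intrinsic dim."""
    # Squared pairwise Euclidean distances via the Gram-matrix identity.
    sq = (X ** 2).sum(axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)
    np.maximum(d2, 0.0, out=d2)      # clamp tiny negatives from round-off
    np.fill_diagonal(d2, np.inf)     # exclude self-distances
    nn = np.argsort(d2, axis=1)[:, :2]
    rows = np.arange(len(X))
    r1 = np.sqrt(d2[rows, nn[:, 0]])
    r2 = np.sqrt(d2[rows, nn[:, 1]])
    mu = r2 / r1
    # MLE of the Pareto shape parameter: N / sum(log mu).
    return len(X) / np.log(mu).sum()

rng = np.random.default_rng(0)
# 5-dimensional Gaussian data embedded linearly in a 64-d "feature" space.
Z = rng.normal(size=(1500, 5))
X = Z @ rng.normal(size=(5, 64))
print(twonn_intrinsic_dim(X))  # typically close to the true value of 5
```

In this setting one would compare the estimate on clean features of a standard meta-model against a robustly regularized one; the paper's observation predicts a lower value for the latter.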