Meta-learning models can quickly adapt to new tasks from few-shot labeled data. However, despite achieving good generalization on few-shot classification tasks, improving the adversarial robustness of meta-learning models in few-shot learning remains challenging. Although adversarial training (AT) methods such as Adversarial Query (AQ) can improve the adversarially robust performance of meta-learning models, AT remains computationally expensive. Moreover, meta-learning models trained with AT suffer a significant accuracy drop on the original clean images. This paper proposes a meta-learning method for adversarially robust neural networks called Long-term Cross Adversarial Training (LCAT). LCAT updates the meta-learning model parameters crosswise along the natural and adversarial sample distribution directions over the long term, improving both adversarial and clean few-shot classification accuracy. Owing to this cross adversarial training schedule, LCAT needs only half as many adversarial training epochs as AQ, resulting in a low adversarial training cost. Experimental results show that LCAT outperforms state-of-the-art adversarial training methods for meta-learning models in both clean and adversarial few-shot classification accuracy.
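To make the cross schedule concrete, below is a minimal PyTorch sketch of one interpretation of LCAT's training loop: epochs alternate between updating on clean query examples and on adversarial query examples, so attack generation (here a standard PGD attack) is paid for in only half of the epochs. The even/odd alternation, the PGD hyperparameters, the `task_loader` interface, and the omission of the meta-learning inner-loop adaptation are all simplifying assumptions for illustration, not the authors' exact procedure.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=7):
    """Craft adversarial query examples with standard PGD (assumed attack)."""
    x_adv = (x.clone().detach()
             + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascend the loss, then project back into the eps-ball and valid range.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1)
    return x_adv.detach()

def lcat_epoch(model, optimizer, task_loader, epoch):
    """One meta-training epoch under the assumed cross schedule:
    even epochs update on the natural (clean) query distribution,
    odd epochs on the adversarial one, halving attack-generation cost."""
    model.train()
    for x_query, y_query in task_loader:
        if epoch % 2 == 1:
            x_query = pgd_attack(model, x_query, y_query)
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x_query), y_query)
        loss.backward()
        optimizer.step()
```

Compared with AQ, which attacks the query set in every epoch, this alternation exposes the model to both distributions while generating adversarial examples only half the time, which is where the claimed computation saving comes from.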