Recent studies show that models trained by continual learning can achieve performance comparable to that of standard supervised learning, and the learning flexibility of continual learning models enables their wide application in the real world. Deep learning models, however, are known to be vulnerable to adversarial attacks. While there are many studies on model robustness in the context of standard supervised learning, protecting continual learning from adversarial attacks has not yet been investigated. To fill this research gap, we are the first to study adversarial robustness in continual learning and propose a novel method called \textbf{T}ask-\textbf{A}ware \textbf{B}oundary \textbf{A}ugmentation (TABA) to boost the robustness of continual learning models. With extensive experiments on CIFAR-10 and CIFAR-100, we demonstrate the efficacy of adversarial training and TABA in defending against adversarial attacks.
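For readers unfamiliar with the adversarial-training baseline evaluated above, the following is a minimal sketch of standard PGD adversarial training (in the style of Madry et al.), assuming a PyTorch classifier; it illustrates only the generic defense, not TABA itself, and the function names and hyperparameters (e.g., \texttt{eps}, \texttt{alpha}, \texttt{steps}) are illustrative assumptions rather than the paper's settings.
\begin{verbatim}
# Sketch of standard PGD adversarial training; illustrative only,
# not the paper's TABA method. Hyperparameters are assumptions.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Craft L-infinity PGD adversarial examples in an eps-ball of x."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()       # ascent step
        x_adv = x + (x_adv - x).clamp(-eps, eps)           # project
        x_adv = x_adv.clamp(0, 1)                          # valid pixels
    return x_adv.detach()

def adversarial_training_step(model, optimizer, x, y):
    """One optimizer step on adversarial examples instead of clean x."""
    model.eval()                   # freeze BN stats while attacking
    x_adv = pgd_attack(model, x, y)
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
\end{verbatim}
In a continual-learning setting this step would be applied within each task's training loop; how TABA augments the decision boundary on top of this baseline is the subject of the method described in the paper.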