Recent advances in continual (incremental or lifelong) learning have concentrated on preventing forgetting, which can have catastrophic consequences, but two outstanding challenges remain. The first is evaluating the robustness of the proposed methods. The second, ensuring the security of learned tasks, remains largely unexplored. This paper presents a comprehensive study of the susceptibility of continually learned tasks (including both current and previously learned tasks), which are vulnerable to forgetting. Such vulnerability of tasks to adversarial attacks raises profound issues of data integrity and privacy. We consider all three scenarios of continual learning (i.e., task-incremental learning, domain-incremental learning, and class-incremental learning) and explore three regularization-based experiments, three replay-based experiments, and one hybrid technique based on the replay and exemplar approach. We examine the robustness of these methods and, in particular, demonstrate that any class belonging to the current or previously learned tasks is prone to misclassification. Through these observations, we identify potential limitations of continual learning approaches against adversarial attacks. Our empirical study recommends that the research community consider the robustness of the proposed continual learning approaches and invest extensive efforts in mitigating catastrophic forgetting.
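To make the threat concrete, the following is a minimal, illustrative sketch (not the paper's actual attack setup) of how a standard gradient-based adversarial perturbation, here FGSM, could push a sample from a previously learned task into misclassification by a continually trained classifier. The model architecture, the epsilon value, and the random data are placeholders introduced only for this example.

```python
# Illustrative FGSM sketch: perturbing a sample from a previously learned task
# so that a continually trained classifier misclassifies it. The model and data
# below are stand-ins, not the experimental setup used in this study.
import torch
import torch.nn as nn

def fgsm_attack(model, x, y, epsilon=0.1):
    """Return an adversarially perturbed copy of x using the fast gradient sign method."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that increases the loss, then clamp to the valid pixel range.
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

# Hypothetical usage: `model` has been trained continually on tasks 1..T, and
# (x_old, y_old) is a batch drawn from an earlier task.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))  # stand-in classifier
x_old = torch.rand(8, 1, 28, 28)
y_old = torch.randint(0, 10, (8,))
x_adv = fgsm_attack(model, x_old, y_old, epsilon=0.1)
print((model(x_adv).argmax(1) != y_old).float().mean())  # fraction now misclassified
```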