Class-Incremental Learning updates a deep classifier with new categories while maintaining accuracy on previously observed classes. Regularizing the neural network weights is a common way to prevent forgetting previously learned classes while learning novel ones. However, existing regularizers use a constant magnitude throughout the learning sessions, which may not reflect the varying difficulty of the tasks encountered during incremental learning. This study investigates the necessity of adaptive regularization in Class-Incremental Learning, which dynamically adjusts the regularization strength according to the complexity of the task at hand. We propose a Bayesian Optimization-based approach to automatically determine the optimal regularization magnitude for each learning task. Our experiments on two datasets with two regularizers demonstrate the importance of adaptive regularization for achieving accurate and less forgetful visual incremental learning.
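As a rough illustration of the idea, the per-task regularization magnitude can be chosen by Bayesian Optimization over a validation objective. The sketch below is a minimal, self-contained 1-D BO loop (Gaussian-process surrogate with an RBF kernel, expected-improvement acquisition); the function names (`tune_lambda`, `val_loss`) and all hyperparameters are illustrative assumptions, not the paper's actual implementation, and `val_loss` stands in for training on the current task with a given regularization strength and measuring validation loss.

```python
import math
import numpy as np

def rbf(a, b, length=0.2):
    # Squared-exponential kernel between two 1-D point sets.
    d = a.reshape(-1, 1) - b.reshape(1, -1)
    return np.exp(-0.5 * (d / length) ** 2)

def gp_posterior(X, y, Xs, noise=1e-6):
    # GP posterior mean/std at candidate points Xs given observations (X, y).
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    Kinv = np.linalg.inv(K)
    mu = Ks.T @ Kinv @ y
    # diag(Kss) = 1 for the RBF kernel, so only the reduction term is needed.
    var = 1.0 - np.einsum("ij,ij->j", Ks, Kinv @ Ks)
    return mu, np.sqrt(np.maximum(var, 1e-12))

def expected_improvement(mu, sigma, best):
    # EI for minimization: how much we expect to improve on the best loss so far.
    imp = best - mu
    z = imp / sigma
    Phi = 0.5 * (1.0 + np.vectorize(math.erf)(z / math.sqrt(2.0)))
    phi = np.exp(-0.5 * z ** 2) / math.sqrt(2.0 * math.pi)
    return imp * Phi + sigma * phi

def tune_lambda(val_loss, bounds=(0.0, 1.0), n_iter=10):
    """Pick a regularization magnitude for the current task by minimizing
    val_loss(lambda) with a small Bayesian Optimization loop."""
    cands = np.linspace(bounds[0], bounds[1], 101)
    X = np.array([bounds[0], 0.5 * (bounds[0] + bounds[1]), bounds[1]])
    y = np.array([val_loss(x) for x in X])
    for _ in range(n_iter):
        mu, sigma = gp_posterior(X, y, cands)
        x_next = cands[int(np.argmax(expected_improvement(mu, sigma, y.min())))]
        X = np.append(X, x_next)
        y = np.append(y, val_loss(x_next))
    return X[int(np.argmin(y))]
```

In an incremental-learning setting this loop would be rerun at the start of each task, so harder tasks (where the validation loss landscape favors weaker regularization) and easier ones can receive different magnitudes rather than one fixed value.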