Deep neural networks are prone to catastrophic forgetting when incrementally trained on new classes or tasks, as adaptation to the new data leads to a drastic decrease in performance on the old classes and tasks. By using a small memory for rehearsal together with knowledge distillation, recent methods have proven effective at mitigating catastrophic forgetting. However, due to the limited size of the memory, a large imbalance between the amount of data available for the old and new classes remains, which deteriorates the overall accuracy of the model. To address this problem, we propose the use of the Balanced Softmax Cross-Entropy loss and show that it can be combined with existing methods for incremental learning to improve their performance, while also decreasing the computational cost of the training procedure in some cases. Extensive experiments on the competitive ImageNet, subImageNet and CIFAR100 datasets show state-of-the-art results.
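As a minimal sketch of the kind of loss the abstract refers to, the Balanced Softmax Cross-Entropy can be realized by shifting each class logit by the logarithm of that class's sample count before applying the standard cross-entropy. The snippet below assumes a PyTorch setting; the function name `balanced_softmax_ce` and the `class_counts` argument are illustrative, not names from the paper.

```python
import torch
import torch.nn.functional as F

def balanced_softmax_ce(logits: torch.Tensor,
                        targets: torch.Tensor,
                        class_counts: torch.Tensor) -> torch.Tensor:
    """Balanced Softmax Cross-Entropy (sketch).

    logits:       (batch, num_classes) raw model outputs
    targets:      (batch,) integer class labels
    class_counts: (num_classes,) number of training samples per class,
                  e.g. few samples for old classes kept in the rehearsal
                  memory and many samples for the new classes
    """
    # Shift each logit by log(n_k); clamp avoids log(0) for unseen classes.
    adjusted_logits = logits + torch.log(class_counts.float().clamp(min=1.0))
    # Standard cross-entropy on the adjusted logits yields the balanced loss.
    return F.cross_entropy(adjusted_logits, targets)

# Illustrative usage with random data:
logits = torch.randn(8, 10)
targets = torch.randint(0, 10, (8,))
class_counts = torch.tensor([20] * 5 + [500] * 5)  # old vs. new classes
loss = balanced_softmax_ce(logits, targets, class_counts)
```

The intent of the logit shift is that classes represented by only a few stored exemplars are not overwhelmed, at training time, by the abundant data of the newly added classes.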