Convolutional neural networks achieve remarkable classification results but struggle to learn new categories on the fly. We present a novel rehearsal-free approach in which a deep neural network continually learns new, unseen object categories without storing any data from prior sequences. Our approach is called RECALL, because the network recalls categories by computing logits for the old categories before training on new ones; these recalled logits are then used during training to keep the old categories from changing. For each new sequence, a new head is added to accommodate the new categories. To mitigate forgetting, we present a regularization strategy that replaces the classification objective with a regression. Moreover, for the known categories, we propose a Mahalanobis loss that incorporates the variances to account for the changing densities between known and unknown categories. Finally, we present a novel dataset for continual learning, especially suited for object recognition on a mobile robot (HOWS-CL-25), comprising 150,795 synthetic images of 25 household object categories. Our approach RECALL outperforms the current state of the art on CORe50 and iCIFAR-100 and achieves the best performance on HOWS-CL-25.
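The following is a minimal sketch of the recall-then-regress idea described above, not the authors' implementation: old-category logits are recorded before a new head is trained, and a regression loss pulls those logits back toward their recalled values, with the old-category term weighted by inverse per-class variances in the Mahalanobis style. The names `RecallNet`, `recall_old_logits`, `recall_loss`, and `inv_var` are hypothetical, and a generic PyTorch setup is assumed.

```python
import torch
import torch.nn as nn

class RecallNet(nn.Module):
    """Backbone with one linear head per learned sequence (hypothetical sketch)."""
    def __init__(self, backbone: nn.Module, feat_dim: int):
        super().__init__()
        self.backbone = backbone
        self.feat_dim = feat_dim
        self.heads = nn.ModuleList()  # one head is appended per sequence

    def add_head(self, num_new_classes: int) -> None:
        # A new head accommodates the categories of the next sequence.
        self.heads.append(nn.Linear(self.feat_dim, num_new_classes))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Concatenate the logits of all heads (assumes at least one head exists).
        feats = self.backbone(x)
        return torch.cat([head(feats) for head in self.heads], dim=1)

def recall_old_logits(model: RecallNet, images: torch.Tensor) -> torch.Tensor:
    # "Recall" step: record old-category logits before training on the new sequence.
    model.eval()
    with torch.no_grad():
        return model(images)

def recall_loss(new_logits: torch.Tensor,
                recalled_logits: torch.Tensor,
                targets_new: torch.Tensor,
                inv_var: torch.Tensor) -> torch.Tensor:
    # Regression (squared error) replaces classification. The old-category part
    # is pulled toward the recalled logits so old categories do not change;
    # weighting by inverse per-class variance gives a Mahalanobis-style distance.
    n_old = recalled_logits.shape[1]
    old_term = ((new_logits[:, :n_old] - recalled_logits) ** 2 * inv_var).mean()
    new_term = ((new_logits[:, n_old:] - targets_new) ** 2).mean()
    return old_term + new_term
```

In this sketch, `inv_var` would hold one inverse-variance estimate per old category, so logits of categories with broad densities are penalized less than tightly clustered ones; how the variances are estimated is left open here.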