Class-incremental learning for semantic segmentation (CiSS) is an actively researched field that aims to update a semantic segmentation model by sequentially learning new semantic classes. A major challenge in CiSS is overcoming the effects of catastrophic forgetting, i.e., the sudden drop in accuracy on previously learned classes after the model is trained on a new set of classes. Despite recent advances in mitigating catastrophic forgetting, its underlying causes specifically in CiSS are not well understood. In a set of experiments and representational analyses, we therefore demonstrate that the semantic shift of the background class and a bias towards the new classes are the major causes of forgetting in CiSS. Furthermore, we show that both causes manifest themselves mostly in the deeper classification layers of the network, while the early layers of the model remain unaffected. Finally, we demonstrate how both causes can be effectively mitigated by exploiting the information contained in the background, with the help of knowledge distillation and an unbiased cross-entropy loss.
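To make the last point concrete, the sketch below illustrates one common realization of such an unbiased cross-entropy: on pixels annotated as background in the current task, the loss credits the summed probability of the background and all previously learned classes, so the model is not penalized for predicting an old class where the new annotation only says "background". This is a minimal PyTorch sketch under these assumptions, not necessarily the paper's exact formulation; the function name `unbiased_cross_entropy` and its arguments are illustrative.

```python
import torch
import torch.nn.functional as F

def unbiased_cross_entropy(logits, targets, old_class_ids, bg_id=0):
    """Cross-entropy that does not penalize old classes on background pixels.

    In incremental steps, pixels labeled `bg_id` may in fact belong to
    previously learned classes. Instead of forcing the model to predict
    `bg_id` there, the background probability is taken as the summed
    probability of the background AND all old classes.

    logits:  (B, C, H, W) raw scores over all classes seen so far
    targets: (B, H, W)    ground-truth labels of the current task
    old_class_ids: iterable of class indices learned in earlier steps
    """
    log_probs = F.log_softmax(logits, dim=1)                    # (B, C, H, W)
    # log of the summed probability over {background} ∪ old classes;
    # logsumexp keeps the computation numerically stable
    merged_ids = torch.tensor([bg_id] + list(old_class_ids),
                              device=logits.device)
    log_bg = torch.logsumexp(log_probs[:, merged_ids], dim=1)   # (B, H, W)
    # replace the background channel with the merged log-probability
    log_probs = log_probs.clone()
    log_probs[:, bg_id] = log_bg
    return F.nll_loss(log_probs, targets)
```

With this loss, a background pixel on which the model assigns high probability to an old class incurs almost no penalty, which directly counteracts the background-shift cause of forgetting described above; the bias towards new classes is then further reduced by distilling the old model's outputs.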