In class-incremental semantic segmentation, we have no access to the labeled data of previous tasks. Therefore, when incrementally learning new classes, deep neural networks suffer from catastrophic forgetting of previously learned knowledge. To address this problem, we propose a self-training approach that leverages unlabeled data for rehearsal of previous knowledge. In addition, we propose a conflict-reduction step to resolve conflicts between the pseudo labels generated by the old and new models. We further show that maximizing self-entropy improves results by smoothing overconfident predictions. Interestingly, our experiments show that the auxiliary data can differ from the training data and that even general-purpose but diverse auxiliary data can lead to large performance gains. The experiments demonstrate state-of-the-art results, with a relative gain of up to 114% on Pascal-VOC 2012 and 8.5% on the more challenging ADE20K compared to previous state-of-the-art methods.
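To make the pipeline concrete, below is a minimal sketch of a self-training step on an unlabeled auxiliary batch: pseudo labels from the frozen old model and the current new model are merged, and an entropy term is maximized to smooth overconfident predictions. The confidence-comparison rule, the 0.5 confidence threshold, the 0.1 entropy weight, and all tensor shapes are illustrative assumptions for this sketch, not the paper's exact conflict-reduction rule or hyperparameters.

```python
import torch
import torch.nn.functional as F

def merge_pseudo_labels(logits_old, logits_new, ignore_index=255):
    """Combine pseudo labels from the old and new models on unlabeled data.

    logits_old: (B, C_old, H, W) output of the frozen old model.
    logits_new: (B, C_new, H, W) output of the current model (C_new > C_old).
    Conflicting pixels are resolved here by a simple confidence comparison,
    a stand-in for the paper's conflict-reduction step (assumption).
    """
    conf_old, label_old = logits_old.softmax(dim=1).max(dim=1)  # old-class hypotheses
    conf_new, label_new = logits_new.softmax(dim=1).max(dim=1)  # includes new classes

    # Prefer the new model's prediction, but fall back to the old model when
    # it is more confident about a previously learned class.
    pseudo = torch.where(conf_old > conf_new, label_old, label_new)

    # Ignore pixels where neither model is confident enough (threshold assumed).
    low_conf = torch.maximum(conf_old, conf_new) < 0.5
    pseudo[low_conf] = ignore_index
    return pseudo

def self_entropy(logits):
    """Mean per-pixel prediction entropy; maximizing it smooths overconfident outputs."""
    p = logits.softmax(dim=1)
    return -(p * p.clamp_min(1e-8).log()).sum(dim=1).mean()

# Illustrative training step on an unlabeled auxiliary batch.
logits_old = torch.randn(2, 16, 64, 64)                       # frozen old model
logits_new = torch.randn(2, 21, 64, 64, requires_grad=True)   # current model
pseudo = merge_pseudo_labels(logits_old, logits_new)
# Cross-entropy on merged pseudo labels, minus a weighted entropy term so that
# minimizing the loss maximizes self-entropy (weight 0.1 is an assumption).
loss = F.cross_entropy(logits_new, pseudo, ignore_index=255) - 0.1 * self_entropy(logits_new)
loss.backward()
```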