Continual learning is a step towards lifelong intelligence, where models continuously learn from recently collected data without forgetting previous knowledge. Existing continual learning approaches mostly focus on image classification in the class-incremental setup, with clear task boundaries and an unlimited computational budget. This work explores Online Domain-Incremental Continual Segmentation~(ODICS), a real-world problem that arises in many applications, \eg, autonomous driving. In ODICS, the model is continually presented with batches of densely labeled images from different domains; computation is limited, and no information about task boundaries is available. In autonomous driving, this corresponds to the realistic scenario of training a segmentation model over time on a sequence of cities. We analyze several existing continual learning methods and show that they do not perform well in this setting, despite working well in class-incremental segmentation. We propose SimCS, a parameter-free method complementary to existing ones that leverages simulated data as a continual learning regularizer. Extensive experiments show consistent improvements over different types of continual learning methods, including those that use regularization and even replay.