Deep learning models for image understanding in real-world settings must adapt to a wide variety of tasks across different domains. Domain adaptation and class-incremental learning address domain and task variability separately, but a unified solution remains an open problem. We tackle both facets of the problem together, accounting for the semantic shift in both the input and label spaces. We start by formally introducing continual learning under task and domain shift. We then address the proposed setup with style transfer techniques, which extend knowledge across domains when learning incremental tasks, and a robust distillation framework, which recollects task knowledge under incremental domain shift. The devised framework (LwS, Learning with Style) generalizes incrementally acquired task knowledge across all the domains encountered, proving robust against catastrophic forgetting. Extensive experimental evaluation on multiple autonomous driving datasets shows that the proposed method outperforms existing approaches, which prove ill-equipped to handle continual semantic segmentation under both task and domain shift.
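The two ingredients named above, style transfer for cross-domain knowledge extension and distillation for retaining task knowledge, can be illustrated with a minimal NumPy sketch. The abstract does not specify the exact mechanisms, so this assumes an AdaIN-style channel-statistics transfer and a pixel-wise soft-label distillation loss; both are common choices, not necessarily the paper's.

```python
import numpy as np


def adain(content, style, eps=1e-5):
    """Re-normalize content features (C x H x W) to carry the style
    domain's channel-wise mean/std (AdaIN-style statistics transfer).
    NOTE: assumed mechanism; the paper's style transfer may differ."""
    c_mu = content.mean(axis=(1, 2), keepdims=True)
    c_std = content.std(axis=(1, 2), keepdims=True)
    s_mu = style.mean(axis=(1, 2), keepdims=True)
    s_std = style.std(axis=(1, 2), keepdims=True)
    return s_std * (content - c_mu) / (c_std + eps) + s_mu


def distill_loss(student_logits, teacher_logits, T=2.0):
    """Mean pixel-wise KL divergence between temperature-softened
    class distributions (C x H x W logits, classes on axis 0)."""
    def softmax(x):
        e = np.exp(x - x.max(axis=0, keepdims=True))
        return e / e.sum(axis=0, keepdims=True)
    p = softmax(teacher_logits / T)  # old-model (teacher) soft labels
    q = softmax(student_logits / T)  # current-model predictions
    return float((p * (np.log(p + 1e-12) - np.log(q + 1e-12))).sum(axis=0).mean())


rng = np.random.default_rng(0)
content = rng.normal(size=(8, 16, 16))                   # features from one domain
style = rng.normal(loc=2.0, scale=3.0, size=(8, 16, 16))  # features from another domain
stylized = adain(content, style)
# stylized now matches the style domain's per-channel statistics,
# so incremental-task training can be replayed under another domain's "style".
```

A distillation term like `distill_loss` is zero when student and teacher agree, penalizing drift from previously learned task knowledge as new domains arrive.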