Deep networks achieve outstanding results in semantic segmentation, but they need to be trained in a single shot with a large amount of data. Continual learning settings, where new classes are learned in incremental steps and previous training data is no longer available, are challenging due to the catastrophic forgetting phenomenon. Existing approaches typically fail when several incremental steps are performed or in the presence of a distribution shift of the background class. We tackle these issues by recreating no longer available data for the old classes and outlining a content inpainting scheme on the background class. We propose two sources for replay data. The first resorts to a generative adversarial network to sample from the class space of past learning steps. The second relies on web-crawled data to retrieve images containing examples of the old classes from online databases. In both scenarios, no samples of past steps are stored, thus avoiding privacy concerns. Replay data are then blended with new samples during the incremental steps. Our approach, RECALL, outperforms state-of-the-art methods.