Continual learning is an emerging topic in deep learning, in which a model is expected to learn new upcoming tasks continually without forgetting previous experience. The field has witnessed numerous advances, yet few attempts have been made in the direction of image restoration. Handling large image sizes and the divergent nature of different degradations poses a unique challenge in the restoration domain. Moreover, existing works require heavily engineered architectural modifications to adapt to new tasks, resulting in significant computational overhead, and regularization-based methods are unsuitable for restoration because different restoration problems require different kinds of feature processing. To this end, we propose a simple modification of the convolution layer that adapts knowledge from previous restoration tasks without touching the main backbone architecture; it can therefore be applied seamlessly to any deep architecture without structural changes. Unlike other approaches, our model can increase the number of trainable parameters without significantly increasing computational overhead or inference time. Experimental validation demonstrates that new restoration tasks can be introduced without compromising the performance of existing tasks, and that performance on new tasks improves by adapting knowledge from the knowledge base built over previous restoration tasks. The code is available at https://github.com/aupendu/continual-restore.
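The abstract does not spell out the layer-level mechanism, so the following is only a minimal PyTorch-style sketch of the general idea it describes: a drop-in convolution that keeps per-task filter banks frozen and lets each new task learn fresh filters plus mixing coefficients over the earlier banks. `ContinualConv2d`, `add_task`, `banks`, and `alphas` are hypothetical names for illustration, not the repository's API.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ContinualConv2d(nn.Module):
    """Hypothetical task-conditioned replacement for nn.Conv2d: each task
    owns a filter bank that is frozen once trained, and every new task
    learns fresh filters plus mixing coefficients over all earlier banks,
    so old-task behaviour is untouched while new tasks reuse prior
    knowledge."""

    def __init__(self, in_ch: int, out_ch: int, kernel_size: int = 3):
        super().__init__()
        self.in_ch, self.out_ch, self.k = in_ch, out_ch, kernel_size
        self.banks = nn.ParameterList()   # one weight tensor per task
        self.alphas = nn.ParameterList()  # per-task mixing coefficients

    def add_task(self) -> None:
        # Freeze everything learned so far; old-task outputs stay intact.
        for p in self.parameters():
            p.requires_grad_(False)
        # New trainable filters for the incoming task ...
        w = torch.empty(self.out_ch, self.in_ch, self.k, self.k)
        nn.init.kaiming_normal_(w)
        self.banks.append(nn.Parameter(w))
        # ... plus coefficients that adapt knowledge from earlier banks.
        self.alphas.append(nn.Parameter(torch.zeros(len(self.banks))))

    def forward(self, x: torch.Tensor, task_id: int) -> torch.Tensor:
        # Effective kernel: a learned mixture of this task's filters and
        # all frozen filters from tasks 0..task_id.
        a = torch.softmax(self.alphas[task_id], dim=0)
        w = sum(a[i] * self.banks[i] for i in range(task_id + 1))
        return F.conv2d(x, w, padding=self.k // 2)


layer = ContinualConv2d(3, 16)
layer.add_task()                                  # task 0, e.g. denoising
y0 = layer(torch.randn(1, 3, 64, 64), task_id=0)
layer.add_task()                                  # task 1 reuses bank 0
y1 = layer(torch.randn(1, 3, 64, 64), task_id=1)  # only task-1 params train
```

In this sketch, inference for a given task collapses the mixture into a single effective kernel before one `conv2d` call, so the parameter count grows with the number of tasks while per-image compute stays essentially that of one convolution, consistent with the overhead claim in the abstract.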