Over-parameterization is a common technique in deep learning that helps models learn and generalize sufficiently to a given task; however, it often leads to enormous network structures and consumes considerable computing resources during training. Recent powerful transformer-based deep learning models for vision tasks usually carry heavy parameter counts and are difficult to train. Yet many dense-prediction low-level computer vision tasks, such as rain streak removal, often need to run on devices with limited computing power and memory in practice. Hence, we introduce a recursive local window-based self-attention structure with residual connections and propose the Deraining Recursive Transformer (DRT), which enjoys the benefits of the transformer while requiring only a small amount of computing resources. In particular, thanks to its recursive architecture, our proposed model uses only 1.3% of the parameters of the current best-performing deraining model while exceeding state-of-the-art methods on the Rain100L benchmark by at least 0.33 dB. Ablation studies also investigate the impact of the number of recursions on deraining performance. Moreover, since the model contains no deraining-specific design, it can also be applied to other image restoration tasks; our experiments show that it achieves competitive results on desnowing. The source code and pretrained model can be found at https://github.com/YC-Liang/DRT.
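To make the recursive, weight-shared design concrete, below is a minimal PyTorch sketch of the general idea: a single window-based self-attention block whose weights are reused across several recursions, each wrapped in a residual connection, plus a global residual from input to output. This is an illustrative sketch only, not the authors' actual DRT architecture; all module names, the window size, channel width, and recursion count are assumptions chosen for brevity.

```python
import torch
import torch.nn as nn

class WindowAttentionBlock(nn.Module):
    """Self-attention over non-overlapping local windows, followed by an
    MLP, each sub-layer wrapped in a residual connection (Swin-style)."""
    def __init__(self, dim, window_size=8, num_heads=4):
        super().__init__()
        self.window_size = window_size
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                                 nn.Linear(4 * dim, dim))

    def forward(self, x):  # x: (B, C, H, W), H and W divisible by window_size
        b, c, h, w = x.shape
        s = self.window_size
        # Partition the feature map into (B * num_windows, s*s, C) token sequences.
        wins = (x.view(b, c, h // s, s, w // s, s)
                 .permute(0, 2, 4, 3, 5, 1)
                 .reshape(-1, s * s, c))
        y = self.norm1(wins)
        wins = wins + self.attn(y, y, y, need_weights=False)[0]
        wins = wins + self.mlp(self.norm2(wins))
        # Reverse the window partition back to (B, C, H, W).
        return (wins.view(b, h // s, w // s, s, s, c)
                    .permute(0, 5, 1, 3, 2, 4)
                    .reshape(b, c, h, w))

class RecursiveDerainer(nn.Module):
    """Applies ONE shared attention block several times (recursion), so
    effective depth grows without adding parameters; a global residual
    lets the tail predict a correction to the rainy input."""
    def __init__(self, dim=32, num_recursions=6):
        super().__init__()
        self.head = nn.Conv2d(3, dim, 3, padding=1)
        self.block = WindowAttentionBlock(dim)  # weights shared across recursions
        self.tail = nn.Conv2d(dim, 3, 3, padding=1)
        self.num_recursions = num_recursions

    def forward(self, rainy):
        feat = self.head(rainy)
        for _ in range(self.num_recursions):
            feat = feat + self.block(feat)      # residual around each recursion
        return rainy + self.tail(feat)          # global residual connection

x = torch.randn(1, 3, 64, 64)
print(RecursiveDerainer()(x).shape)  # torch.Size([1, 3, 64, 64])
```

The parameter saving comes entirely from weight sharing: the loop reuses one block, so six recursions cost the same parameter budget as one, which is how a recursive design can reach a small fraction of a conventional deep model's size.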