Video restoration aims to restore multiple high-quality frames from multiple low-quality frames. Existing video restoration methods generally fall into two extreme cases: they either restore all frames in parallel or restore the video frame by frame in a recurrent way, each with its own merits and drawbacks. Typically, the former has the advantage of temporal information fusion but suffers from large model size and intensive memory consumption; the latter has a relatively small model size as it shares parameters across frames, but it lacks long-range dependency modeling ability and parallelizability. In this paper, we attempt to integrate the advantages of both by proposing a recurrent video restoration transformer, namely RVRT. RVRT processes local neighboring frames in parallel within a globally recurrent framework, achieving a good trade-off between model size, effectiveness, and efficiency. Specifically, RVRT divides the video into multiple clips and uses the previously inferred clip feature to estimate the subsequent clip feature. Within each clip, different frame features are jointly updated with implicit feature aggregation. Across clips, guided deformable attention is designed for clip-to-clip alignment: it predicts multiple relevant locations from the whole inferred clip and aggregates their features with an attention mechanism. Extensive experiments on video super-resolution, deblurring, and denoising show that the proposed RVRT achieves state-of-the-art performance on benchmark datasets with balanced model size, testing memory, and runtime.
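The globally-recurrent, locally-parallel scheme described above can be illustrated with a toy sketch. This is only a structural illustration under simplifying assumptions: the within-clip "joint update" and the cross-clip "guided deformable attention" are replaced by trivial averaging stand-ins (the function and parameter names below are our own, not from the paper), and per-frame features are plain vectors rather than spatial feature maps.

```python
# Toy sketch of RVRT's clip-wise recurrence: frames inside a clip are
# processed together (in parallel in the real model), while information
# flows recurrently from one clip to the next.
def restore_video(frames, clip_size=2):
    """frames: list of per-frame feature vectors (lists of floats).
    Returns one refined feature vector per input frame."""
    outputs = []
    prev_clip = None  # previously inferred clip feature
    for start in range(0, len(frames), clip_size):
        # all frames of the current clip are handled jointly
        clip = [list(f) for f in frames[start:start + clip_size]]
        if prev_clip is not None:
            # stand-in for clip-to-clip alignment: fuse a summary of the
            # WHOLE previous clip (the real model uses guided deformable
            # attention over multiple predicted locations)
            dim = len(clip[0])
            guidance = [sum(f[d] for f in prev_clip) / len(prev_clip)
                        for d in range(dim)]
            clip = [[x + 0.5 * g for x, g in zip(f, guidance)] for f in clip]
        # stand-in for implicit joint feature aggregation within the clip
        dim = len(clip[0])
        mean = [sum(f[d] for f in clip) / len(clip) for d in range(dim)]
        clip = [[x + 0.1 * m for x, m in zip(f, mean)] for f in clip]
        outputs.extend(clip)
        prev_clip = clip
    return outputs
```

Note the trade-off the abstract mentions: with `clip_size=1` this degenerates to purely frame-by-frame recurrence, while `clip_size=len(frames)` processes everything in one parallel block; intermediate clip sizes interpolate between the two extremes.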