Many deep-learning-based video compression artifact removal algorithms have been proposed to recover high-quality videos from low-quality compressed videos. Recently, methods have been proposed to mine spatiotemporal information by utilizing multiple neighboring frames as reference frames. However, these post-processing methods exploit adjacent frames directly while neglecting information available in the rest of the video itself. In this paper, we propose an effective reference frame proposal strategy to boost the performance of existing multi-frame approaches. Besides, we introduce a loss based on the fast Fourier transform~(FFT) to further improve the effectiveness of restoration. Experimental results show that our method achieves better fidelity and perceptual performance on the MFQE 2.0 dataset than the state-of-the-art methods. Our method won Track 1 and Track 2 of the NTIRE 2021 Quality Enhancement of Heavily Compressed Videos Challenge, and ranked 2nd in Track 3.
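For concreteness, a minimal sketch of what an FFT-based restoration loss might look like in PyTorch is shown below. The abstract does not specify the exact formulation, so the L1-on-spectrum form and the `fft_loss` helper are assumptions, not the paper's definition.

```python
import torch


def fft_loss(restored: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """L1 distance between the 2D FFT spectra of restored and target frames.

    Both tensors are expected in (N, C, H, W) layout. This is one common
    frequency-domain loss; the paper's exact formulation may differ.
    """
    restored_fft = torch.fft.fft2(restored, norm="ortho")
    target_fft = torch.fft.fft2(target, norm="ortho")
    # Magnitude of the complex difference, averaged over all frequencies.
    return torch.mean(torch.abs(restored_fft - target_fft))
```

In practice such a term would typically be added to a pixel-domain loss (e.g. L1 or Charbonnier) with a small weighting factor.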