This paper studies the problem of real-world video super-resolution (VSR) for animation videos, and reveals three key improvements for practical animation VSR. First, recent real-world super-resolution approaches typically rely on degradation simulation using basic operators without any learning capability, such as blur, noise, and compression. In this work, we propose to learn such basic operators from real low-quality animation videos, and incorporate the learned ones into the degradation generation pipeline. Such neural-network-based basic operators help better capture the distribution of real degradations. Second, a large-scale high-quality animation video dataset, AVC, is built to facilitate comprehensive training and evaluation for animation VSR. Third, we further investigate an efficient multi-scale network structure. It takes advantage of the efficiency of unidirectional recurrent networks and the effectiveness of sliding-window-based methods. Thanks to these careful designs, our method, AnimeSR, is capable of restoring real-world low-quality animation videos effectively and efficiently, achieving superior performance to previous state-of-the-art methods. Code and models are available at https://github.com/TencentARC/AnimeSR.
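To make the contrast concrete, the hand-crafted degradation simulation that the abstract describes (and that AnimeSR improves upon by learning the operators) can be sketched as a pipeline of fixed, non-learnable operators. The sketch below is illustrative only, not the authors' actual pipeline: the operator set, parameter ranges, and function names are assumptions, and JPEG-style compression is approximated by coarse quantization to keep the example dependency-free.

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_blur(img, sigma=1.0):
    """Separable Gaussian blur: convolve a 1-D kernel along rows, then columns."""
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    blurred = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, blurred)

def add_gaussian_noise(img, std=0.02):
    """Additive white Gaussian noise, clipped to the valid [0, 1] range."""
    return np.clip(img + rng.normal(0.0, std, img.shape), 0.0, 1.0)

def fake_compression(img, levels=32):
    """Crude stand-in for compression artifacts: quantize to few intensity levels."""
    return np.round(img * (levels - 1)) / (levels - 1)

def degrade(hr, scale=4):
    """Classical blur -> downsample -> noise -> compression degradation chain."""
    lq = gaussian_blur(hr, sigma=rng.uniform(0.5, 2.0))
    lq = lq[::scale, ::scale]  # nearest-neighbor downsampling for simplicity
    lq = add_gaussian_noise(lq, std=rng.uniform(0.0, 0.05))
    return fake_compression(lq)

# Synthesize a low-quality frame from a random "high-resolution" frame.
hr_frame = rng.random((64, 64))
lq_frame = degrade(hr_frame, scale=4)
```

In this hand-crafted scheme every operator is fixed and parameterized by random hyperparameters; AnimeSR's key observation is that such operators cannot cover real animation degradations, so it replaces them with small neural networks trained on real low-quality clips.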