Most video super-resolution methods focus on restoring high-resolution video frames from low-resolution videos without taking compression into account. However, most videos on the web or mobile devices are compressed, and the compression can be severe when bandwidth is limited. In this paper, we propose a new compression-informed video super-resolution model to restore high-resolution content without introducing artifacts caused by compression. The proposed model consists of three modules for video super-resolution: bi-directional recurrent warping, detail-preserving flow estimation, and Laplacian enhancement. All three modules are designed to handle compression properties such as the location of intra-frames in the input and smoothness in the output frames. For thorough performance evaluation, we conduct extensive experiments on standard datasets with a wide range of compression rates, covering many real video use cases. We show that our method not only recovers high-resolution content on uncompressed frames from the widely used benchmark datasets, but also achieves state-of-the-art performance in super-resolving compressed videos across numerous quantitative metrics. We also evaluate the proposed method by simulating streaming from YouTube to demonstrate its effectiveness and robustness. The source code and trained models are available at https://github.com/google-research/google-research/tree/master/comisr.
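To make the Laplacian enhancement idea concrete, below is a minimal sketch of adding a scaled high-frequency (Laplacian-style) residual back onto a super-resolved frame. This is an illustrative unsharp-mask-style approximation, not the paper's learned enhancement module; the function name `laplacian_enhance` and the `alpha` and `sigma` parameters are hypothetical choices for this example.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def laplacian_enhance(frame: np.ndarray, alpha: float = 0.5, sigma: float = 1.0) -> np.ndarray:
    """Sharpen a super-resolved frame by adding a scaled high-frequency residual.

    frame: H x W x C float array with values in [0, 1].
    alpha: strength of the high-frequency boost (illustrative value, not from the paper).
    sigma: Gaussian blur scale used to isolate high frequencies.
    """
    # Approximate the Laplacian response as the difference between the frame
    # and its Gaussian-blurred version (a standard high-pass approximation).
    blurred = gaussian_filter(frame, sigma=(sigma, sigma, 0))
    high_freq = frame - blurred
    # Add the scaled residual to emphasize edges and fine detail that
    # compression and upsampling tend to smooth away.
    enhanced = frame + alpha * high_freq
    return np.clip(enhanced, 0.0, 1.0)

# Example usage on a dummy frame:
sr_frame = np.random.rand(64, 64, 3).astype(np.float32)
sharpened = laplacian_enhance(sr_frame)
```

In the actual model, the enhancement would be applied by a trained network rather than a fixed filter; this sketch only shows the general mechanism of boosting high-frequency content in the output frames.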