Despite the quality improvements brought by recent methods, video super-resolution (SR) remains very challenging, especially for videos that are low-light and noisy. The current best solution is to sequentially apply the best models for video SR, denoising, and illumination enhancement, but doing so often degrades the image quality, due to inconsistencies among the models. This paper presents a new parametric representation called Deep Parametric 3D Filters (DP3DF), which incorporates local spatiotemporal information to enable simultaneous denoising, illumination enhancement, and SR efficiently in a single encoder-decoder network. Also, a dynamic residual frame is jointly learned with the DP3DF via a shared backbone to further boost the SR quality. We performed extensive experiments, including a large-scale user study, to show our method's effectiveness. Our method consistently surpasses state-of-the-art methods on all the challenging real datasets with top PSNR and user ratings, yet with a very fast run time.
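To make the core idea concrete, below is a minimal PyTorch sketch (not the authors' implementation) of how a per-pixel spatiotemporal filter plus a jointly predicted residual frame could be applied to a short clip. The function name apply_dp3df, the filter size k, the temporal window T, and the softmax normalization of the filter weights are illustrative assumptions; upsampling and color channels are omitted for brevity.

```python
# A minimal sketch, assuming the backbone predicts, for every output pixel,
# a k x k spatial filter spanning T input frames, plus a residual frame.
import torch
import torch.nn.functional as F

def apply_dp3df(frames, filters, residual, k=5):
    """
    frames:   (B, T, H, W)      grayscale input clip (one channel for brevity)
    filters:  (B, T*k*k, H, W)  per-pixel 3D filter weights from the backbone
    residual: (B, H, W)         jointly predicted residual frame
    returns:  (B, H, W)         filtered output frame plus the residual
    """
    B, T, H, W = frames.shape
    # Gather the k x k spatial neighborhood of every pixel in every frame:
    # unfold yields (B, T*k*k, H*W) patches, zero-padded at the borders.
    patches = F.unfold(frames, kernel_size=k, padding=k // 2)
    patches = patches.view(B, T * k * k, H, W)
    # Per-pixel inner product between each 3D neighborhood and its filter.
    out = (patches * filters).sum(dim=1)
    return out + residual

# Toy usage with random tensors standing in for network predictions.
B, T, H, W, k = 1, 3, 32, 32, 5
frames = torch.rand(B, T, H, W)
filters = torch.softmax(torch.rand(B, T * k * k, H, W), dim=1)  # normalized weights
residual = torch.zeros(B, H, W)
sr_frame = apply_dp3df(frames, filters, residual, k)
print(sr_frame.shape)  # torch.Size([1, 32, 32])
```

Because denoising, illumination enhancement, and detail synthesis are all expressed through the same learned per-pixel weights and residual, a single forward pass can perform the three tasks jointly rather than chaining separate models.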