The success of existing video super-resolution (VSR) algorithms stems mainly from exploiting the temporal information of neighboring frames. However, none of these methods discuss the influence of temporal redundancy in patches containing stationary objects and background, and they usually use all the information in adjacent frames without discrimination. In this paper, we observe that temporal redundancy has an adverse effect on information propagation, which limits the performance of most existing VSR methods. Motivated by this observation, we aim to improve existing VSR algorithms by handling temporally redundant patches in an optimized manner. We develop two simple yet effective plug-and-play methods that improve the performance of existing local and non-local propagation-based VSR algorithms on widely used public videos. To evaluate the robustness and performance of existing VSR algorithms more comprehensively, we also collect a new dataset that contains a variety of public videos as a testing set. Extensive evaluations show that the proposed methods significantly improve the performance of existing VSR methods on videos collected from wild scenarios while maintaining their performance on commonly used datasets. The code is available at https://github.com/HYHsimon/Boosted-VSR.
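To make the notion of temporally redundant patches concrete, a minimal sketch is given below. It is not the paper's actual algorithm; it simply flags patches as "redundant" when the mean absolute difference between co-located patches of consecutive frames falls below a threshold, which is one straightforward way to detect stationary objects and background. The function name, patch size, and threshold are illustrative assumptions.

```python
# Illustrative sketch (not the paper's method): flag temporally
# redundant patches by thresholding the mean absolute difference
# between co-located patches of two consecutive frames.
import numpy as np

def redundant_patch_mask(prev_frame, cur_frame, patch=8, thresh=2.0):
    """Return a boolean grid: True where a patch is nearly static."""
    h, w = cur_frame.shape[:2]
    gh, gw = h // patch, w // patch
    diff = np.abs(cur_frame.astype(np.float32) - prev_frame.astype(np.float32))
    if diff.ndim == 3:                 # average over color channels
        diff = diff.mean(axis=2)
    # Average the per-pixel difference inside each non-overlapping patch.
    diff = diff[:gh * patch, :gw * patch].reshape(gh, patch, gw, patch)
    patch_mad = diff.mean(axis=(1, 3))
    return patch_mad < thresh

# Toy example: a static background with a single changing block.
f0 = np.zeros((32, 32), dtype=np.uint8)
f1 = f0.copy()
f1[0:8, 0:8] = 255                     # only the top-left patch changes
mask = redundant_patch_mask(f0, f1, patch=8, thresh=2.0)
print(mask)                            # top-left entry False, rest True
```

A propagation-based VSR pipeline could use such a mask to skip or down-weight the redundant patches during feature propagation, rather than treating all neighboring-frame information uniformly.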