User videos shared on social media platforms usually suffer from degradations caused by unknown proprietary processing procedures, which means that their visual quality is poorer than that of the originals. This paper presents a new general video restoration framework for restoring user videos shared on social media platforms. Most deep learning-based video restoration methods perform end-to-end mapping and treat feature extraction largely as a black box, so the role each feature plays is often unknown. In contrast, our new method, termed Video restOration through adapTive dEgradation Sensing (VOTES), introduces the concept of a degradation feature map (DFM) to explicitly guide the video restoration process. Specifically, for each video frame, we first adaptively estimate its DFM to extract features representing the difficulty of restoring its different regions. We then feed the DFM to a convolutional neural network (CNN) to compute hierarchical degradation features that modulate an end-to-end video restoration backbone network, so that more attention is explicitly paid to regions that are potentially harder to restore, which in turn leads to better restoration performance. We explain the design rationale of the VOTES framework and present extensive experimental results showing that VOTES outperforms various state-of-the-art techniques both quantitatively and qualitatively. In addition, we contribute a large-scale real-world database of user videos shared on different social media platforms. Code and datasets are available at https://github.com/luohongming/VOTES.git
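To make the modulation idea concrete, the following is a minimal PyTorch-style sketch of how a DFM could be turned into hierarchical features that modulate backbone features. This is an illustrative assumption, not the authors' actual implementation (which is available in the repository above): the module name DegradationModulation, the feature-wise affine (scale/shift) transform, and all channel sizes are hypothetical choices made for exposition.

```python
import torch
import torch.nn as nn


class DegradationModulation(nn.Module):
    """Illustrative sketch: encode a degradation feature map (DFM) with a
    small CNN and use the result to spatially modulate backbone features.
    The affine modulation here is an assumption for exposition; the paper's
    exact mechanism may differ."""

    def __init__(self, dfm_channels: int = 1, feat_channels: int = 64):
        super().__init__()
        # Small CNN that maps the DFM to a degradation feature representation.
        self.encoder = nn.Sequential(
            nn.Conv2d(dfm_channels, feat_channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(feat_channels, feat_channels, 3, padding=1),
            nn.ReLU(inplace=True),
        )
        # Heads producing per-pixel scale and shift for the backbone features.
        self.to_scale = nn.Conv2d(feat_channels, feat_channels, 3, padding=1)
        self.to_shift = nn.Conv2d(feat_channels, feat_channels, 3, padding=1)

    def forward(self, backbone_feat: torch.Tensor, dfm: torch.Tensor) -> torch.Tensor:
        # dfm: (B, dfm_channels, H, W), encoding per-region restoration difficulty.
        h = self.encoder(dfm)
        scale = self.to_scale(h)
        shift = self.to_shift(h)
        # Spatially varying affine transform: regions the DFM marks as harder
        # to restore receive a stronger adjustment of the backbone features.
        return backbone_feat * (1 + scale) + shift


if __name__ == "__main__":
    feat = torch.randn(2, 64, 128, 128)  # backbone features for a frame
    dfm = torch.rand(2, 1, 128, 128)     # estimated degradation feature map
    out = DegradationModulation()(feat, dfm)
    print(out.shape)  # torch.Size([2, 64, 128, 128])
```

In this sketch the modulation is applied at a single scale; applying the same pattern at several feature resolutions would yield the hierarchical degradation features described in the abstract.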