On most video platforms, such as YouTube and TikTok, the videos served to viewers have typically undergone multiple rounds of encoding: hardware encoding by recording devices, software encoding by video editing apps, and one or more transcoding passes by video application servers. Previous work on compressed video restoration generally assumes that compression artifacts are caused by one-time encoding, so the derived solutions often perform poorly in practice. In this paper, we propose a new method, the temporal spatial auxiliary network (TSAN), for transcoded video restoration. Our method exploits the distinct characteristics of video encoding versus transcoding: we treat the initial shallow-encoded videos as intermediate labels that help the network conduct self-supervised attention training. In addition, we exploit adjacent multi-frame information and propose temporal deformable alignment and pyramidal spatial fusion for transcoded video restoration. Experimental results demonstrate that the proposed method outperforms previous techniques. The code is available at https://github.com/icecherylXuli/TSAN.