Spatial resolution adaptation is a technique often employed in video compression to enhance coding efficiency. This approach encodes a lower resolution version of the input video and reconstructs the original resolution during decoding. Instead of using conventional up-sampling filters, recent work has employed advanced super-resolution methods based on convolutional neural networks (CNNs) to further improve reconstruction quality. These approaches are usually trained to minimise pixel-based losses such as Mean-Squared Error (MSE), although such metrics do not correlate well with subjective opinion. In this paper, a perceptually-inspired super-resolution approach (M-SRGAN) is proposed for spatial up-sampling of compressed video using a modified CNN model, which has been trained using a generative adversarial network (GAN) on compressed content with perceptual loss functions. The proposed method was integrated with HEVC HM 16.20, and has been evaluated on the JVET Common Test Conditions (UHD test sequences) using the Random Access configuration. The results show evident perceptual quality improvement over the original HM 16.20, with an average bitrate saving of 35.6% (Bjøntegaard Delta measurement) based on a perceptual quality metric, VMAF.
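The bitrate saving quoted above uses the Bjøntegaard Delta (BD) rate, which compares two rate–quality curves by fitting a cubic polynomial to log-bitrate versus quality and integrating over the overlapping quality interval. A minimal sketch of this standard calculation (the function name and interface are illustrative, not taken from the paper's evaluation code):

```python
import numpy as np

def bd_rate(rate_anchor, quality_anchor, rate_test, quality_test):
    """Average bitrate difference (%) of the test codec vs. the anchor
    at equal quality (e.g. VMAF), following the Bjøntegaard method:
    cubic fit of log-rate vs. quality, integrated over the shared
    quality interval."""
    log_r1 = np.log(np.asarray(rate_anchor, dtype=float))
    log_r2 = np.log(np.asarray(rate_test, dtype=float))

    # Fit log-rate as a cubic polynomial of the quality score.
    p1 = np.polyfit(quality_anchor, log_r1, 3)
    p2 = np.polyfit(quality_test, log_r2, 3)

    # Integrate both fits over the overlapping quality range.
    lo = max(min(quality_anchor), min(quality_test))
    hi = min(max(quality_anchor), max(quality_test))
    int1 = np.polyval(np.polyint(p1), hi) - np.polyval(np.polyint(p1), lo)
    int2 = np.polyval(np.polyint(p2), hi) - np.polyval(np.polyint(p2), lo)

    # Average log-rate difference, converted to a percentage.
    avg_diff = (int2 - int1) / (hi - lo)
    return (np.exp(avg_diff) - 1.0) * 100.0
```

A negative BD-rate indicates the test codec needs less bitrate for the same quality; a reported saving of 35.6% corresponds to a BD-rate of roughly −35.6% measured with VMAF as the quality axis.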