Face video super-resolution algorithms aim to reconstruct realistic facial details from continuous input video sequences. However, existing video processing algorithms usually contain redundant parameters so that they can handle a variety of super-resolution scenes. In this work, we focus on super-resolving the face regions of the original video scenes, while the remaining regions are interpolated. This specialized super-resolution task makes it possible to prune the redundant parameters of general video super-resolution networks. We construct a dataset consisting entirely of face video sequences for network training and evaluation, and perform hyper-parameter optimization in our experiments. We use three combined strategies to optimize the network parameters, together with a simultaneous train-evaluation method that accelerates the optimization process. Results show that the simultaneous train-evaluation method improves training speed and facilitates the generation of efficient networks. The generated network reduces parameters by at least 52.4% and FLOPs by 20.7%, while achieving better PSNR and SSIM than state-of-the-art video super-resolution algorithms. When processing 36x36x1x3 input video frame sequences, the efficient network delivers real-time performance of 47.62 FPS. We name our proposal hyper-parameter optimization for face Video Super-Resolution (HO-FVSR); it is open-sourced at https://github.com/yphone/efficient-network-for-face-VSR.