We propose a hybrid recurrent Video Colorization framework with a hybrid Generative Adversarial Network (VCGAN), an improved approach to video colorization based on end-to-end learning. VCGAN addresses two prevalent issues in the video colorization domain: temporal consistency, and the unification of the colorization network and the refinement network into a single architecture. To enhance colorization quality and spatiotemporal consistency, the main stream of the VCGAN generator is assisted by two additional networks: a global feature extractor and a placeholder feature extractor. The global feature extractor encodes the global semantics of the grayscale input to enhance colorization quality, whereas the placeholder feature extractor acts as a feedback connection that encodes the semantics of the previously colorized frame in order to maintain spatiotemporal consistency. If the input to the placeholder feature extractor is replaced with the grayscale input, the hybrid VCGAN can also perform image colorization. To improve the consistency of distant frames, we propose a dense long-term loss that smooths the temporal disparity between every pair of remote frames. Trained jointly with colorization and temporal losses, VCGAN strikes a good balance between color vividness and video continuity. Experimental results demonstrate that VCGAN produces higher-quality and temporally more consistent colorized videos than existing approaches.
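The recurrence and the dense long-term loss described above might be sketched as follows. This is a minimal illustration, not the authors' implementation: the network calls are placeholders, and the real loss presumably compares flow-warped frames with occlusion handling, which is omitted here.

```python
import numpy as np

def colorize_clip(gray_frames, generator, global_extractor, placeholder_extractor):
    """Recurrent colorization loop (sketch). Each frame is colorized from the
    global semantics of its grayscale input plus a feedback encoding of the
    previously colorized frame. Feeding the grayscale frame itself to the
    placeholder extractor (first iteration below) reduces the model to
    single-image colorization, as noted in the abstract."""
    prev_color = None
    colorized = []
    for gray in gray_frames:
        # Feedback input: previous colorized frame, or the grayscale frame itself.
        feedback = prev_color if prev_color is not None else gray
        color = generator(gray, global_extractor(gray), placeholder_extractor(feedback))
        colorized.append(color)
        prev_color = color
    return colorized

def dense_long_term_loss(frames):
    """Average disparity over every pair of frames in the clip, so far-apart
    frames are also encouraged to agree (flow warping omitted in this sketch)."""
    total, pairs = 0.0, 0
    for i in range(len(frames)):
        for j in range(i + 1, len(frames)):
            total += np.abs(frames[i] - frames[j]).mean()
            pairs += 1
    return total / max(pairs, 1)
```

Because the loss averages over all frame pairs rather than only adjacent ones, a color drift that accumulates slowly over many frames is still penalized directly.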