Video colorization is a challenging and highly ill-posed problem. Although recent years have witnessed remarkable progress in single-image colorization, relatively little research effort has been devoted to video colorization, and existing methods often suffer from severe flickering artifacts (temporal inconsistency) or unsatisfactory colorization quality. We address this problem from a new perspective, jointly considering colorization and temporal consistency in a unified framework. Specifically, we propose a novel temporally consistent video colorization framework (TCVC). TCVC effectively propagates frame-level deep features in a bidirectional way to enhance the temporal consistency of colorization. Furthermore, TCVC introduces a self-regularization learning (SRL) scheme that minimizes the difference between predictions obtained with different time steps. SRL requires no ground-truth color videos for training and further improves temporal consistency. Experiments demonstrate that our method not only produces visually pleasing colorized videos, but also achieves clearly better temporal consistency than state-of-the-art methods.
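The two core ideas above can be illustrated with a minimal sketch. Plain Python floats stand in for per-frame deep features, the blending weight `alpha`, the helper names, and the identity "colorization" step are all illustrative assumptions rather than the paper's actual architecture; the point is only to show bidirectional propagation and an SRL-style loss that compares predictions made with different time steps, with no ground-truth colors involved.

```python
def propagate_bidirectional(features, alpha=0.5):
    """Blend each frame's feature with its temporal neighbors,
    first forward in time, then backward (illustrative stand-in
    for the paper's frame-level feature propagation)."""
    fwd = list(features)
    for t in range(1, len(fwd)):                 # forward pass
        fwd[t] = alpha * fwd[t] + (1 - alpha) * fwd[t - 1]
    out = list(fwd)
    for t in range(len(out) - 2, -1, -1):        # backward pass
        out[t] = alpha * out[t] + (1 - alpha) * out[t + 1]
    return out

def colorize(frames, alpha=0.5):
    """Toy 'colorizer': propagation followed by an identity head,
    so each prediction depends on its temporal context."""
    return propagate_bidirectional(frames, alpha)

def self_regularization_loss(frames, stride=2, alpha=0.5):
    """SRL-style objective: penalize the gap between predictions
    obtained at every frame and predictions obtained with a larger
    time step -- no ground-truth color video is needed."""
    dense = colorize(frames, alpha)              # predict on all frames
    sparse = colorize(frames[::stride], alpha)   # predict with a coarser step
    return sum(abs(d - s)
               for d, s in zip(dense[::stride], sparse)) / len(sparse)
```

For a perfectly static sequence the two prediction schedules agree and the loss is zero; for a flickering sequence they diverge, so minimizing this loss pushes the model toward temporally consistent output.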