In this paper, we present Color2Embed, a fast exemplar-based image colorization approach that uses color embeddings. Because input and ground-truth image pairs are difficult to obtain, exemplar-based colorization models are generally hard to train in an unsupervised, unpaired manner. Current algorithms usually rely on two procedures: i) retrieving a large number of highly similar reference images to prepare the training dataset, which is inevitably time-consuming and tedious; and ii) designing complicated modules that transfer the colors of the reference image to the target image by computing and leveraging the deep semantic correspondence between them (e.g., non-local operations), which is computationally expensive at test time. In contrast to previous methods, we first adopt a self-augmented self-reference learning scheme, where the reference image is generated by graphical transformations of the original color image, so that training can be formulated in a paired manner. Second, to reduce processing time, our method explicitly extracts color embeddings and exploits a progressive style feature transformation network, which injects these embeddings into the reconstruction of the final image. Such a design is much more lightweight and intelligible, achieving appealing performance with fast processing speed.
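The two ideas in the abstract can be illustrated with a minimal NumPy sketch: a self-reference training triplet is built by perturbing the ground-truth color image, and a color embedding is injected into decoder features by channel-wise affine modulation. This is only an illustration under our own assumptions (the specific transformations, the luminance weights, and the modulation weight matrices are placeholders, not the paper's actual architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_self_reference_pair(image):
    """Build an (input, reference, ground truth) triplet from a single
    color image; `image` is an H x W x 3 float array in [0, 1]."""
    # Network input: the luminance channel of the original color image.
    gray = image @ np.array([0.299, 0.587, 0.114])
    # Self-reference: the same image under a flip plus mild color jitter,
    # so it shares the ground-truth palette without pixelwise alignment.
    ref = image[:, ::-1]
    ref = np.clip(ref * rng.uniform(0.9, 1.1, size=3), 0.0, 1.0)
    return gray, ref, image

def inject_color_embedding(feat, color_emb, w_gamma, w_beta):
    """Modulate a C x H x W feature map with per-channel scale and shift
    vectors predicted linearly from the color embedding (weights here are
    illustrative stand-ins for learned layers)."""
    gamma = w_gamma @ color_emb   # shape (C,)
    beta = w_beta @ color_emb     # shape (C,)
    return feat * (1.0 + gamma)[:, None, None] + beta[:, None, None]
```

Because the reference is derived from the ground truth itself, every training sample is paired by construction, and the modulation step avoids any test-time correspondence computation between reference and target.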