Image-based artistic rendering can synthesize a variety of expressive styles using algorithmic image filtering. In contrast to deep learning-based methods, these heuristics-based filtering techniques can operate on high-resolution images, are interpretable, and can be parameterized according to various design aspects. However, adapting or extending these techniques to produce new styles is often a tedious and error-prone task that requires expert knowledge. We propose a new paradigm to alleviate this problem: implementing algorithmic image filtering techniques as differentiable operations that can learn parameterizations aligned to certain reference styles. To this end, we present WISE, an example-based image-processing system that can handle a multitude of stylization techniques, such as watercolor, oil or cartoon stylization, within a common framework. By training parameter prediction networks for global and local filter parameterizations, we can simultaneously adapt effects to reference styles and image content, e.g., to enhance facial features. Our method can be optimized in a style-transfer framework or learned in a generative-adversarial setting for image-to-image translation. We demonstrate that jointly training an XDoG filter and a CNN for postprocessing achieves results comparable to those of a state-of-the-art GAN-based method.
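To make the core idea concrete, the following is a minimal sketch, assuming PyTorch, of an XDoG filter expressed as differentiable operations with learnable global parameters that are fitted to a reference stylization by backpropagation. It is not the authors' implementation: the class and helper names (DifferentiableXDoG, gaussian_blur) and the simple L1 fitting loss are illustrative placeholders, whereas the paper optimizes in a style-transfer framework or a generative-adversarial setting and additionally uses parameter prediction networks.

```python
# Sketch only: a differentiable XDoG filter with learnable global parameters.
import torch
import torch.nn as nn
import torch.nn.functional as F

def gaussian_kernel(sigma, radius=10):
    """Build a 2D Gaussian kernel differentiably w.r.t. sigma."""
    x = torch.arange(-radius, radius + 1, dtype=torch.float32)
    g = torch.exp(-x ** 2 / (2 * sigma ** 2))
    g = g / g.sum()
    return g[:, None] * g[None, :]            # outer product -> 2D kernel

def gaussian_blur(img, sigma, radius=10):
    """Blur a (B,1,H,W) grayscale tensor; gradients flow into sigma."""
    kern = gaussian_kernel(sigma, radius)[None, None]
    return F.conv2d(img, kern, padding=radius)

class DifferentiableXDoG(nn.Module):
    """XDoG edge stylization with learnable global parameters."""
    def __init__(self):
        super().__init__()
        self.sigma = nn.Parameter(torch.tensor(1.0))   # base Gaussian scale
        self.k     = nn.Parameter(torch.tensor(1.6))   # scale ratio of 2nd Gaussian
        self.p     = nn.Parameter(torch.tensor(20.0))  # DoG sharpening strength
        self.eps   = nn.Parameter(torch.tensor(0.1))   # threshold level
        self.phi   = nn.Parameter(torch.tensor(10.0))  # soft-threshold steepness

    def forward(self, gray):                           # gray: (B,1,H,W) in [0,1]
        g1 = gaussian_blur(gray, self.sigma)
        g2 = gaussian_blur(gray, self.k * self.sigma)
        u  = (1 + self.p) * g1 - self.p * g2           # sharpened DoG response
        # A smooth step in place of XDoG's hard threshold keeps the filter differentiable.
        return 0.5 * (1 + torch.tanh(self.phi * (u - self.eps)))

# Usage sketch: fit the filter parameters to a reference stylization with an
# L1 loss (placeholder for the style-transfer or adversarial losses in the paper).
xdog = DifferentiableXDoG()
opt = torch.optim.Adam(xdog.parameters(), lr=1e-2)
content = torch.rand(1, 1, 128, 128)     # placeholder grayscale input
reference = torch.rand(1, 1, 128, 128)   # placeholder reference style image
for _ in range(100):
    opt.zero_grad()
    loss = F.l1_loss(xdog(content), reference)
    loss.backward()
    opt.step()
```

A local parameterization in the sense described above could replace the scalar parameters with per-pixel parameter maps predicted from the input image by a network, so that the effect adapts to image content, e.g., around facial features.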