Character line drawing synthesis can be formulated as a special case of the image-to-image translation problem, in which a photo is automatically transformed into line-drawing style. In this paper, we present the first generative adversarial network (GAN)-based, end-to-end trainable translation architecture, dubbed P2LDGAN, for the automatic generation of high-quality character line drawings from input photos/images. The core component of our approach is a joint geometric-semantic-driven generator, which uses a carefully designed cross-scale dense skip connection framework to embed learned geometric and semantic information for generating delicate line drawings. To support the evaluation of our model, we release a new dataset of 1,532 well-matched pairs of freehand character line drawings and their corresponding character photos/images; the line drawings, covering diverse styles, were drawn manually by skilled artists. Extensive experiments on this dataset demonstrate the superior performance of our proposed model over state-of-the-art approaches in quantitative, qualitative and human evaluations. Our code, models and dataset are available at https://github.com/cnyvfang/P2LDGAN.
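To make the generator design concrete, the sketch below shows one plausible form of an encoder-decoder generator with cross-scale dense skip connections in PyTorch: every decoder stage fuses features from all encoder scales (resized to the current resolution), so shallow geometric cues and deep semantic cues are both available when synthesizing line strokes. This is a minimal illustration under stated assumptions, not the authors' released implementation; all class, function and parameter names (CrossScaleDenseGenerator, conv_block, widths) are hypothetical, and the actual P2LDGAN code is in the linked repository.

```python
# Minimal sketch of cross-scale dense skip connections in an
# encoder-decoder generator (hypothetical; not the P2LDGAN release).
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_block(in_ch, out_ch):
    # Basic conv -> norm -> activation unit used at every scale.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1),
        nn.InstanceNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )


class CrossScaleDenseGenerator(nn.Module):
    def __init__(self, in_ch=3, out_ch=1, widths=(64, 128, 256, 512)):
        super().__init__()
        # Encoder: one block per scale, downsampled between scales.
        self.encoders = nn.ModuleList()
        prev = in_ch
        for w in widths:
            self.encoders.append(conv_block(prev, w))
            prev = w
        # Each decoder stage receives the previous decoder output plus
        # features from EVERY encoder scale (the cross-scale dense skips).
        skip_ch = sum(widths)
        self.decoders = nn.ModuleList()
        prev = widths[-1]
        for w in reversed(widths[:-1]):
            self.decoders.append(conv_block(prev + skip_ch, w))
            prev = w
        self.head = nn.Conv2d(prev, out_ch, 1)

    def forward(self, x):
        feats = []
        h = x
        for enc in self.encoders:
            h = enc(h)
            feats.append(h)
            h = F.avg_pool2d(h, 2)
        h = feats[-1]  # deepest (most semantic) features
        for dec in self.decoders:
            h = F.interpolate(h, scale_factor=2, mode="bilinear",
                              align_corners=False)
            # Resize all encoder features to the current scale and fuse.
            skips = [F.interpolate(f, size=h.shape[-2:], mode="bilinear",
                                   align_corners=False) for f in feats]
            h = dec(torch.cat([h] + skips, dim=1))
        return torch.tanh(self.head(h))  # single-channel line drawing


if __name__ == "__main__":
    g = CrossScaleDenseGenerator()
    out = g(torch.randn(1, 3, 256, 256))
    print(out.shape)  # torch.Size([1, 1, 256, 256])
```

The design intent illustrated here is that a plain U-Net skip connects each encoder scale to only one decoder stage, whereas dense cross-scale skips let every decoder stage see both fine geometry and coarse semantics at once, which the abstract identifies as the key to delicate line synthesis.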