Researchers have explored various ways to generate realistic images from freehand sketches, e.g., for objects and human faces. However, generating realistic human body images from sketches remains a challenging problem. This is due, first, to human perceptual sensitivity to body shapes; second, to the complexity of human images caused by variations in body shape and pose; and third, to the domain gap between realistic images and freehand sketches. In this work, we present DeepPortraitDrawing, a deep generative framework for converting roughly drawn sketches to realistic human body images. To encode complicated body shapes under various poses, we take a local-to-global approach. Locally, we employ semantic part auto-encoders to construct part-level shape spaces, which are useful for refining the geometry of an input pre-segmented hand-drawn sketch. Globally, we employ a cascaded spatial transformer network to refine the structure of body parts by adjusting their spatial locations and relative proportions. Finally, we use a global synthesis network for the sketch-to-image translation task, and a face refinement network to enhance facial details. Extensive experiments show that, given roughly sketched human portraits, our method produces more realistic images than state-of-the-art sketch-to-image synthesis techniques.
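The local-to-global pipeline described above can be sketched structurally as follows. This is a minimal, hypothetical illustration of the data flow only: the function names mirror the stages named in the abstract, but every network is replaced by a trivial stand-in (the authors' actual trained models are not reproduced here).

```python
import numpy as np

# Hypothetical structural sketch of the pipeline stages named in the
# abstract. All "networks" below are identity-like stand-ins, not the
# authors' trained models.

def part_autoencoder(part_sketch):
    """Locally refine one pre-segmented body-part sketch via a
    part-level shape space (stand-in: pass-through)."""
    return part_sketch

def cascaded_spatial_transformer(parts):
    """Globally adjust part locations and relative proportions
    (stand-in: pass-through)."""
    return parts

def global_synthesis(parts):
    """Translate the assembled sketch into an RGB image (stand-in:
    merge part maps, then stack into 3 channels)."""
    assembled = np.maximum.reduce(parts)
    return np.stack([assembled] * 3, axis=-1)

def face_refinement(image):
    """Enhance facial details (stand-in: pass-through)."""
    return image

# Toy input: three 64x64 part sketches (e.g., head, torso, legs).
parts = [np.zeros((64, 64)) for _ in range(3)]
refined = [part_autoencoder(p) for p in parts]
arranged = cascaded_spatial_transformer(refined)
image = face_refinement(global_synthesis(arranged))
print(image.shape)  # (64, 64, 3)
```

The key design point the sketch conveys is the ordering: part-level geometric refinement happens before the global structure adjustment, and image synthesis only sees the already-corrected sketch.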