Human re-rendering from a single image is a starkly under-constrained problem, and state-of-the-art algorithms often exhibit undesired artefacts, such as over-smoothing, unrealistic distortions of body parts and garments, or implausible changes of texture. To address these challenges, we propose a new method for neural re-rendering of a human under a novel user-defined pose and viewpoint, given one input image. Our algorithm represents body pose and shape as a parametric mesh which can be reconstructed from a single image and easily reposed. Instead of a colour-based UV texture map, our approach further employs a learned high-dimensional UV feature map to encode appearance. This rich implicit representation captures detailed appearance variation across poses, viewpoints, person identities and clothing styles better than learned colour texture maps. The body model with the rendered feature maps is fed through a neural image-translation network that creates the final rendered colour image. These components are combined in an end-to-end-trained neural network architecture that takes as input a source person image and images of the parametric body model in the source pose and the desired target pose. Experimental evaluation demonstrates that our approach produces higher-quality single-image re-rendering results than existing methods.
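To make the described pipeline concrete, the following is a minimal PyTorch sketch of the three stages named above (appearance encoding into a learned UV feature map, re-rendering of that feature map under the target pose, and colour image synthesis via an image-translation network). It is not the authors' implementation: the module names, network depths, feature dimensionality, texture resolution, and the assumption that body-UV renderings of the parametric mesh (e.g. DensePose-style UV maps) and a foreground mask are available as inputs are all illustrative.

import torch
import torch.nn as nn
import torch.nn.functional as F

FEAT_DIM = 16   # dimensionality of the learned UV feature map (assumed value)
TEX_SIZE = 128  # resolution of the UV feature texture (assumed value)

class FeatureEncoder(nn.Module):
    """Encodes the source image into per-pixel appearance features (stand-in encoder)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, FEAT_DIM, 3, padding=1),
        )

    def forward(self, src_img):          # (B, 3, H, W) -> (B, FEAT_DIM, H, W)
        return self.net(src_img)

class RenderNet(nn.Module):
    """Image-translation network: UV features rendered in the target pose -> colour image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(FEAT_DIM, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, feats):            # (B, FEAT_DIM, H, W) -> (B, 3, H, W)
        return self.net(feats)

def scatter_to_uv(feats, uv, mask):
    """Scatter per-pixel features into UV texture space using the source body-UV rendering."""
    B, C, H, W = feats.shape
    tex = feats.new_zeros(B, C, TEX_SIZE, TEX_SIZE)
    u = (uv[:, 0] * (TEX_SIZE - 1)).long().clamp(0, TEX_SIZE - 1)  # (B, H, W)
    v = (uv[:, 1] * (TEX_SIZE - 1)).long().clamp(0, TEX_SIZE - 1)
    for b in range(B):
        fg = mask[b] > 0.5                                 # (H, W) foreground pixels
        tex[b, :, v[b][fg], u[b][fg]] = feats[b][:, fg]    # last write wins per texel
    return tex

def sample_at_target(tex, tgt_uv):
    """Gather the UV feature texture at the body-UV rendering of the target pose."""
    grid = tgt_uv.permute(0, 2, 3, 1) * 2 - 1              # UV in [0,1] -> grid in [-1,1]
    return F.grid_sample(tex, grid, align_corners=True)

def rerender(src_img, src_uv, src_mask, tgt_uv, encoder, renderer):
    """Full pass: source image + source/target body-UV renderings -> re-rendered image."""
    feats = encoder(src_img)                       # per-pixel appearance features
    tex = scatter_to_uv(feats, src_uv, src_mask)   # learned high-dimensional UV feature map
    tgt_feats = sample_at_target(tex, tgt_uv)      # feature map rendered in the target pose
    return renderer(tgt_feats)                     # final colour image

In the full method this pipeline is trained end to end, so the encoder, the UV feature map and the translation network are optimised jointly rather than in isolation as in this sketch.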