Generating novel, yet realistic, images of persons is a challenging task due to the complex interplay between different image factors, such as the foreground, background and pose information. In this work, we aim at generating such images based on a novel, two-stage reconstruction pipeline that learns a disentangled representation of the aforementioned image factors and generates novel person images at the same time. First, a multi-branched reconstruction network is proposed to disentangle and encode the three factors into embedding features, which are then combined to re-compose the input image itself. Second, three corresponding mapping functions are learned in an adversarial manner, one per factor, to map Gaussian noise to the learned embedding feature space. Using the proposed framework, we can manipulate the foreground, background and pose of the input image, and also sample new embedding features to generate targeted manipulations that provide more control over the generation process. Experiments on the Market-1501 and DeepFashion datasets show that our model not only generates realistic person images with new foregrounds, backgrounds and poses, but also manipulates the generated factors and interpolates the in-between states. Another set of experiments on Market-1501 shows that our model can also be beneficial for the person re-identification task.
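The two-stage idea above can be sketched schematically: encode an image into three factor embeddings, recompose it from their concatenation, and separately map Gaussian noise into each embedding space so a sampled factor can replace an encoded one. The sketch below is purely illustrative, not the authors' architecture; the random linear projections stand in for the trained encoder branches, decoder, and adversarially learned mapping functions, and all dimensions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical embedding sizes for the three disentangled factors.
D_FG, D_BG, D_POSE = 128, 128, 32

def encode(image, out_dim, seed):
    """Stand-in for one encoder branch of the reconstruction network:
    a fixed random linear projection of the flattened image."""
    w = np.random.default_rng(seed).standard_normal((image.size, out_dim))
    return image.ravel() @ (w / np.sqrt(image.size))

def decode(e_fg, e_bg, e_pose, out_shape, seed=7):
    """Stand-in decoder: recompose an image from the concatenated embeddings."""
    e = np.concatenate([e_fg, e_bg, e_pose])
    w = np.random.default_rng(seed).standard_normal((e.size, int(np.prod(out_shape))))
    return (e @ (w / np.sqrt(e.size))).reshape(out_shape)

def map_noise(z, out_dim, seed):
    """Stand-in for one learned mapping function: Gaussian noise -> embedding.
    In the paper this is trained adversarially; here it is a fixed projection."""
    w = np.random.default_rng(seed).standard_normal((z.size, out_dim))
    return z @ (w / np.sqrt(z.size))

# Stage 1: encode a toy input image into three factor embeddings, then recompose.
image = rng.standard_normal((16, 16))
e_fg = encode(image, D_FG, 1)
e_bg = encode(image, D_BG, 2)
e_pose = encode(image, D_POSE, 3)
recon = decode(e_fg, e_bg, e_pose, image.shape)

# Stage 2: sample Gaussian noise, map it into the foreground embedding space,
# and recompose with the real background/pose -- a targeted manipulation.
z = rng.standard_normal(64)
e_fg_sampled = map_noise(z, D_FG, 4)
manipulated = decode(e_fg_sampled, e_bg, e_pose, image.shape)

assert recon.shape == image.shape and manipulated.shape == image.shape
```

Interpolating between an encoded embedding and a sampled one (e.g. `(1 - t) * e_fg + t * e_fg_sampled`) gives the in-between states mentioned in the abstract.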