Given an image of a target person and an image of another person wearing a garment, we automatically generate the target person in the given garment. At the core of our method is a pose-conditioned StyleGAN2 latent space interpolation, which seamlessly combines the areas of interest from each image, i.e., body shape, hair, and skin color are derived from the target person, while the garment with its folds, material properties, and shape comes from the garment image. By automatically optimizing for interpolation coefficients per layer in the latent space, we can perform a seamless, yet true-to-source, merging of the garment and target person. Our algorithm allows for garments to deform according to the given body shape, while preserving pattern and material details. Experiments demonstrate state-of-the-art photo-realistic results at high resolution ($512\times 512$).
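To make the per-layer interpolation concrete, here is a minimal sketch of optimizing one blending coefficient per latent layer, assuming a StyleGAN2 generator that consumes extended $W+$ codes of shape (num_layers, latent_dim). All names (`w_person`, `w_garment`, the surrogate loss) are hypothetical stand-ins for illustration, not the authors' actual code; the real method would score the generated image with masked perceptual and identity losses.

```python
import torch

# Hypothetical per-layer latent interpolation sketch. A StyleGAN2 W+ code
# at 512x512 resolution typically has 16 layers of 512-dim style vectors.
num_layers, latent_dim = 16, 512

# Stand-ins for codes obtained by projecting the target-person image and
# the garment image into the generator's latent space.
w_person = torch.randn(num_layers, latent_dim)
w_garment = torch.randn(num_layers, latent_dim)

# One learnable coefficient per layer; a sigmoid keeps each blend in [0, 1].
q = torch.zeros(num_layers, requires_grad=True)
optimizer = torch.optim.Adam([q], lr=0.05)

def blend(q: torch.Tensor) -> torch.Tensor:
    """Blend the two codes layer-by-layer with per-layer coefficients."""
    alpha = torch.sigmoid(q).unsqueeze(1)        # shape (num_layers, 1)
    return (1.0 - alpha) * w_person + alpha * w_garment

for step in range(200):
    w_mix = blend(q)
    # Surrogate loss so the sketch runs standalone: pull coarse layers
    # toward the person code and fine layers toward the garment code.
    # The actual method instead evaluates G(w_mix) against both inputs.
    loss = (w_mix[:8] - w_person[:8]).pow(2).mean() \
         + (w_mix[8:] - w_garment[8:]).pow(2).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Because coarse StyleGAN2 layers govern pose and body shape while finer layers govern texture and color, optimizing a separate coefficient per layer lets the blend take identity from one code and garment appearance from the other.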