Studies of virtual try-on (VITON) have shown the effectiveness of generative neural networks for virtually exploring fashion products, and some recent VITON studies have attempted to synthesize an image of a human wearing multiple types of garments (e.g., top and bottom clothes). However, when replacing the top and bottom clothes of the target human, numerous wearing styles are possible for a given combination of clothes. In this paper, we address the problem of variation in wearing style when simultaneously replacing the top and bottom clothes of the model. We introduce Wearing-Guide VITON (WG-VITON), which takes an additional binary mask as input to control the wearing style of the generated image. Our experiments show that WG-VITON effectively generates an image of the model wearing the given top and bottom clothes, and can create complicated wearing styles such as partly tucking the top into the bottom.
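The conditioning mechanism described above (an extra binary mask steering the wearing style) can be sketched as follows. This is a hypothetical illustration, not the authors' implementation: the array names, toy sizes, and the convention that 1 marks "top over bottom" are all assumptions; the only point shown is that the wearing-guide mask is supplied to the generator as an additional input channel.

```python
import numpy as np

# Hypothetical sketch of wearing-guide conditioning (NumPy only, not the
# authors' code). The generator receives the binary mask as one extra
# channel alongside the other inputs.

H, W = 8, 6  # toy spatial size (assumption)

# Toy person representation: 3 channels (e.g., RGB), values in [0, 1].
person = np.random.rand(3, H, W)

# Wearing-guide mask: assume 1 where the top garment covers the bottom
# (e.g., an untucked hem) and 0 where the bottom covers the top (tucked in).
wearing_guide = np.zeros((1, H, W))
wearing_guide[0, :4, :] = 1.0  # upper half marked "top over bottom"

# Concatenate along the channel axis to form the generator input.
generator_input = np.concatenate([person, wearing_guide], axis=0)

print(generator_input.shape)  # (4, 8, 6)
```

Varying the mask while keeping the garments fixed would then let the same top/bottom pair be rendered in different wearing styles.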