Virtual try-on is a promising application of computer graphics and human-computer interaction that can have a profound real-world impact, especially during this pandemic. Existing image-based works attempt to synthesize a try-on image from a single image of a target garment, which inherently limits the ability to react to possible interactions: it is difficult to reproduce the change of wrinkles caused by changes in pose and body size, or by pulling and stretching of the garment by hand. In this paper, we propose an alternative per-garment capture and synthesis workflow that handles such rich interactions by training the model with many systematically captured images. Our workflow is composed of two parts: garment capture and clothed-person image synthesis. We designed an actuated mannequin and an efficient capturing process that collects the detailed deformations of the target garments under diverse body sizes and poses. Furthermore, we propose the use of a custom-designed measurement garment, and we capture paired images of the measurement garment and the target garments. We then learn a mapping between the measurement garment and the target garments using deep image-to-image translation, so that the customer can interactively try on the target garments during online shopping.
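To make the core idea concrete, the sketch below frames the measurement-garment-to-target-garment mapping as supervised, paired image-to-image translation. It is a minimal illustration only: the encoder-decoder layout, the plain L1 reconstruction loss, and the stand-in tensors are assumptions for exposition, not the paper's actual network architecture, losses, or training data pipeline.

```python
# Minimal sketch: learn a mapping from a measurement-garment image to the
# corresponding target-garment image using paired captures.
# The architecture and loss here are illustrative assumptions.
import torch
import torch.nn as nn


class GarmentTranslator(nn.Module):
    """Toy encoder-decoder that maps a measurement-garment image (3xHxW)
    to a target-garment image of the same resolution."""

    def __init__(self, ch=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, ch, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch * 2, 4, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(ch * 2, ch, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(ch, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, measurement_img):
        return self.decoder(self.encoder(measurement_img))


def train_step(model, optimizer, measurement_img, target_img):
    """One supervised step on a paired (measurement, target) capture."""
    optimizer.zero_grad()
    pred = model(measurement_img)
    loss = nn.functional.l1_loss(pred, target_img)  # pixel-wise reconstruction loss
    loss.backward()
    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    model = GarmentTranslator()
    opt = torch.optim.Adam(model.parameters(), lr=2e-4)
    # Stand-in tensors in [-1, 1]; in practice these would be the
    # systematically captured paired images of the measurement garment
    # and the target garment under matching poses and body sizes.
    measurement = torch.rand(1, 3, 256, 256) * 2 - 1
    target = torch.rand(1, 3, 256, 256) * 2 - 1
    print("L1 loss:", train_step(model, opt, measurement, target))
```

At inference time, a customer wearing the measurement garment would be photographed, and the trained translator would synthesize how the chosen target garment deforms under the same pose, enabling interactive try-on.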