The image-based virtual try-on task has abundant applications and has recently become a hot research topic. Existing 2D image-based virtual try-on methods aim to transfer a target clothing image onto a reference person, which has two main drawbacks: the size and length of the clothing cannot be controlled precisely, and the user's figure cannot be accurately estimated when the user is wearing thick clothes, resulting in inaccurate dressing effects. In this paper, we put forward a related task that aims to dress clothing onto underwear models. To address the above drawbacks, we propose a Shape Controllable Virtual Try-On Network (SC-VTON), in which a graph attention network integrates the information of the model and the clothing to generate the warped clothing image. In addition, control points are incorporated into SC-VTON to attain the desired clothing shape. Furthermore, by adding a Splitting Network and a Synthesis Network, we can use clothing/model pair data to help optimize the deformation module and generalize the task to the typical virtual try-on task. Extensive experiments show that the proposed method achieves accurate shape control. Meanwhile, compared with other methods, our method can generate high-resolution results with detailed textures.
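The abstract states that a graph attention network fuses model and clothing information to produce the warped clothing image. Purely as an illustration of the underlying mechanism (not the authors' implementation), a single graph-attention layer in the standard GAT formulation can be sketched as below; all function names, feature dimensions, and the node layout (clothing-mesh vertices plus body keypoints as graph nodes) are assumptions for this sketch:

```python
import numpy as np

def gat_layer(h, adj, W, a):
    """One graph-attention layer (GAT-style), as a minimal sketch.

    h:   (N, F) node features, e.g. clothing-mesh vertices and body keypoints
    adj: (N, N) 0/1 adjacency mask (which nodes attend to which)
    W:   (F, F_out) shared linear projection
    a:   (2 * F_out,) attention vector scoring concatenated node pairs
    Returns (N, F_out) aggregated node features.
    """
    z = h @ W                                   # project all node features
    n = z.shape[0]
    # Pairwise attention logits: e_ij = LeakyReLU(a^T [z_i || z_j])
    e = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            e[i, j] = np.concatenate([z[i], z[j]]) @ a
    e = np.where(e > 0, e, 0.2 * e)             # LeakyReLU, slope 0.2
    e = np.where(adj > 0, e, -1e9)              # mask out non-edges
    # Row-wise softmax over each node's neighborhood
    alpha = np.exp(e - e.max(axis=1, keepdims=True))
    alpha = alpha / alpha.sum(axis=1, keepdims=True)
    return alpha @ z                            # attention-weighted aggregation
```

In a warping module of this kind, the aggregated features would typically be fed to a small head that regresses per-vertex 2D offsets for the clothing mesh; that head is omitted here.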