Virtual try-on is a promising computer vision topic with high commercial value, wherein a new garment is visually worn on a person with a photo-realistic effect. Previous studies perform shape and content inference in a single stage, employing a single-scale warping mechanism and a relatively unsophisticated content inference mechanism. These approaches lead to suboptimal results in garment warping and skin retention under challenging try-on scenarios. To address these limitations, we propose a novel virtual try-on method via a progressive inference paradigm (PGVTON) that leverages a top-down inference pipeline and a general garment try-on strategy. Specifically, we propose a robust try-on parsing inference method that disentangles semantic categories and introduces consistency constraints. Exploiting the try-on parsing as shape guidance, we implement the garment try-on via warping-mapping-composition. To facilitate adaptation to a wide range of try-on scenarios, we adopt a covering-more-and-selecting-one warping strategy and explicitly distinguish tasks based on alignment. Additionally, we adapt StyleGAN2 to perform re-naked skin inpainting, conditioned on the target skin shape and spatial-agnostic skin features. Experiments demonstrate that our method achieves state-of-the-art performance under two challenging scenarios. The code will be available at https://github.com/NerdFNY/PGVTON.