We propose a new generative model for 3D garment deformations that enables us to learn, for the first time, a data-driven method for virtual try-on that effectively addresses garment-body collisions. In contrast to existing methods that require an undesirable postprocessing step to fix garment-body interpenetrations at test time, our approach directly outputs 3D garment configurations that do not collide with the underlying body. Key to our success is a new canonical space for garments that removes pose-and-shape deformations already captured by a new diffused human body model, which extrapolates body surface properties such as skinning weights and blendshapes to any 3D point. We leverage this representation to train a generative model with a novel self-supervised collision term that learns to reliably solve garment-body interpenetrations. We extensively evaluate and compare our results with recently proposed data-driven methods, and show that our method is the first to successfully address garment-body contact in unseen body shapes and motions, without compromising realism and detail.
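To make the diffused human body model concrete, below is a minimal sketch of how a body surface property such as skinning weights could be extrapolated to an arbitrary 3D point. All names (`diffuse_skinning_weights`, `body_verts`, `body_weights`) are hypothetical, and this nearest-vertex assignment is only an assumed stand-in; the paper's actual diffusion scheme may blend several neighbors or use a smoother falloff.

```python
import torch

def diffuse_skinning_weights(points, body_verts, body_weights):
    """Extrapolate per-vertex skinning weights to arbitrary 3D points.

    points:       (P, 3) query points (e.g. garment vertices)
    body_verts:   (B, 3) body surface vertices
    body_weights: (B, J) skinning weights over J joints

    Nearest-neighbor sketch: each query point copies the weights of its
    closest body vertex.
    """
    d = torch.cdist(points, body_verts)   # (P, B) pairwise distances
    idx = d.argmin(dim=1)                 # nearest body vertex per point
    return body_weights[idx]              # (P, J) diffused weights
```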
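Similarly, the self-supervised collision term can be illustrated with a sketch that penalizes garment vertices lying inside the body, or within a small margin of it, using a signed distance approximated from the nearest body vertex and its outward normal. The function and tensor names and the `eps` margin are assumptions for illustration; the loss used in the paper may differ in form and weighting.

```python
import torch

def collision_loss(garment_verts, body_verts, body_normals, eps=2e-3):
    """Self-supervised garment-body collision penalty (illustrative sketch).

    garment_verts: (G, 3) predicted garment vertices
    body_verts:    (B, 3) posed body vertices
    body_normals:  (B, 3) unit outward normals at body vertices
    eps:           safety margin in meters (assumed value)
    """
    d = torch.cdist(garment_verts, body_verts)    # (G, B) distances
    idx = d.argmin(dim=1)                         # nearest body vertex
    offset = garment_verts - body_verts[idx]      # (G, 3)
    # approximate signed distance: negative when inside the body
    sdf = (offset * body_normals[idx]).sum(dim=1)
    # hinge penalty on vertices inside the body or within the margin
    return torch.relu(eps - sdf).mean()
```

Because this term depends only on the predicted garment and the underlying body, with no ground-truth collision labels, it can be minimized as a self-supervised objective during training.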