Recent approaches to drape garments quickly over arbitrary human bodies leverage self-supervision to eliminate the need for large training sets. However, they are designed to train one network per clothing item, which severely limits their generalization abilities. In our work, we rely on self-supervision to train a single network to drape multiple garments. This is achieved by predicting a 3D deformation field conditioned on the latent codes of a generative network, which models garments as unsigned distance fields. Our pipeline can generate and drape previously unseen garments of any topology, whose shape can be edited by manipulating their latent codes. Being fully differentiable, our formulation makes it possible to recover accurate 3D models of garments from partial observations -- images or 3D scans -- via gradient descent. Our code will be made publicly available.
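The inverse-recovery step mentioned above (fitting garment latent codes to partial observations by gradient descent) can be illustrated with a toy sketch. This is not the paper's pipeline: the real system uses a neural UDF decoder and a draping network, whereas here a fixed linear map stands in for the differentiable decoder, and the gradient is written analytically.

```python
import numpy as np

# Toy stand-in for latent-code recovery via gradient descent:
# a hypothetical linear "decoder" A maps a latent code z to an
# observation, and we recover z by minimizing the squared error
# between decoded and observed data.

rng = np.random.default_rng(0)
A = rng.normal(size=(8, 3))          # hypothetical linear decoder (not from the paper)
z_true = np.array([0.5, -1.0, 2.0])  # latent code we want to recover
obs = A @ z_true                     # "partial observation" (e.g. scan-derived)

z = np.zeros(3)                      # initial latent guess
lr = 0.02
for _ in range(500):
    residual = A @ z - obs           # prediction error
    grad = 2.0 * A.T @ residual      # analytic gradient of ||A z - obs||^2
    z -= lr * grad                   # gradient-descent update on the latent code

print(np.round(z, 3))                # converges toward z_true
```

In the actual method, the analytic gradient would be supplied by automatic differentiation through the generative network and the deformation field, which is what "fully differentiable" buys: the same optimization loop works with images or 3D scans as the observation.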