We propose a new approach to human clothing modeling based on point clouds. Within this approach, we learn a deep model that can predict point clouds of various outfits, for various human poses and for various human body shapes. Notably, outfits of various types and topologies can be handled by the same model. Using the learned model, we can infer the geometry of new outfits from as little as a single image, and perform outfit retargeting to new bodies in new poses. We complement our geometric model with appearance modeling that uses the point cloud geometry as a geometric scaffold, and employs neural point-based graphics to capture outfit appearance from videos and to re-render the captured outfits. We validate both the geometric modeling and the appearance modeling aspects of the proposed approach against recently proposed methods, and establish the viability of point-based clothing modeling.