In this paper we introduce SMPLicit, a novel generative model that jointly represents body pose, shape, and clothing geometry. In contrast to existing learning-based approaches, which require training a specific model for each type of garment, SMPLicit can represent different garment topologies in a unified manner (e.g., from sleeveless tops to hoodies to open jackets), while controlling other properties such as garment size or tightness/looseness. We show our model to be applicable to a large variety of garments, including T-shirts, hoodies, jackets, shorts, pants, skirts, shoes, and even hair. The representational flexibility of SMPLicit builds upon an implicit model conditioned on the SMPL human body parameters and a learnable latent space that is semantically interpretable and aligned with the clothing attributes. The proposed model is fully differentiable, allowing its use within larger end-to-end trainable systems. In the experimental section, we demonstrate that SMPLicit can be readily used for fitting 3D scans and for 3D reconstruction from images of dressed people. In both cases we go beyond the state of the art by retrieving complex garment geometries, handling situations with multiple clothing layers, and providing a tool for easy outfit editing. To stimulate further research in this direction, we will make our code and model publicly available at http://www.iri.upc.edu/people/ecorona/smplicit/.
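To make the core idea concrete, the sketch below shows what an implicit garment function conditioned on SMPL parameters and a latent cloth code could look like. This is a minimal illustrative toy, not the paper's architecture: the dimensions, the two-layer MLP with random weights, and the function name `cloth_occupancy` are all assumptions introduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (illustrative, not from the paper): a latent
# cloth code z, SMPL shape parameters beta, and a 3D query point p.
D_Z, D_BETA, D_HIDDEN = 18, 10, 128

# Random weights stand in for a trained network.
W1 = rng.normal(0.0, 0.1, (3 + D_Z + D_BETA, D_HIDDEN))
W2 = rng.normal(0.0, 0.1, (D_HIDDEN, 1))

def cloth_occupancy(p, z, beta):
    """Toy implicit garment function C(p, z, beta) -> occupancy in (0, 1).

    For a 3D point p near the body, the network predicts whether p lies
    inside the garment encoded by latent code z, conditioned on the SMPL
    shape parameters beta. Because every operation is differentiable,
    z and beta can be optimized by gradient descent when fitting scans
    or images.
    """
    x = np.concatenate([p, z, beta])
    h = np.maximum(x @ W1, 0.0)                  # ReLU hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ W2)[0]))    # sigmoid occupancy

p = np.array([0.0, 0.1, 0.02])   # query point in body-centered coordinates
z = rng.normal(size=D_Z)         # latent cloth code (e.g., from fitting)
beta = np.zeros(D_BETA)          # neutral SMPL shape
occ = cloth_occupancy(p, z, beta)
print(0.0 < occ < 1.0)           # a valid occupancy value
```

Evaluating this function on a dense grid of points and extracting the level set (e.g., with marching cubes) would yield a garment surface; varying `z` would change the garment's topology and fit.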