Recent research has focused on generating human models and garments from 2D images. However, state-of-the-art methods either handle only a single garment layer on a human model or generate multiple garment layers without any guarantee of an intersection-free geometric relationship between them. In reality, people wear multiple layers of garments in daily life, where an inner garment layer may be partially covered by an outer one. In this paper, we address this multi-layer modeling problem and propose the Layered-Garment Net (LGN), which generates intersection-free multiple layers of garments, defined by implicit function fields over the body surface, from a person's near front-view image. With a specially designed garment indication field (GIF), we enforce an implicit covering relationship between the signed distance fields (SDFs) of different layers to avoid self-intersections among garment surfaces and the human body. Experiments demonstrate the strength of the proposed LGN framework in generating multi-layer garments compared to state-of-the-art methods. To the best of our knowledge, LGN is the first work to generate intersection-free multiple layers of garments on the human body from a single image.
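For intuition, a covering relationship between implicit layers can be expressed as an ordering of their SDFs; the following is an illustrative sketch with assumed notation ($f_k$, $f_{k-1}$, and the covered-region description are not the paper's own symbols), not necessarily the exact constraint LGN enforces. With the convention that an SDF is negative inside its surface, requiring the outer layer $k$ to lie below the inner layer $k-1$ wherever the GIF indicates coverage implies containment of the inner surface, and hence no intersection there:
\[
f_{k}(\mathbf{x}) \;\le\; f_{k-1}(\mathbf{x}) \quad \text{for all } \mathbf{x} \text{ in the covered region}
\;\;\Longrightarrow\;\;
\{\, \mathbf{x} : f_{k-1}(\mathbf{x}) \le 0 \,\} \subseteq \{\, \mathbf{x} : f_{k}(\mathbf{x}) \le 0 \,\} \text{ in that region,}
\]
where $f_{k}$ and $f_{k-1}$ denote the SDFs of the outer and inner layers, respectively, and the covered region is the part of space flagged by the GIF.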