Garment representation, editing and animation are challenging topics in the area of computer vision and graphics. It remains difficult for existing garment representations to achieve smooth and plausible transitions between different shapes and topologies. In this work, we introduce DeepCloth, a unified framework for garment representation, reconstruction, animation and editing. Our unified framework contains three components: First, we represent the garment geometry with a "topology-aware UV-position map", which allows for the unified description of various garments with different shapes and topologies by introducing an additional topology-aware UV-mask for the UV-position map. Second, to further enable garment reconstruction and editing, we contribute a method to embed the UV-based representations into a continuous feature space, which enables garment shape reconstruction and editing by optimization and control in the latent space, respectively. Finally, we propose a garment animation method by unifying our neural garment representation with body shape and pose, which achieves plausible garment animation results by leveraging the dynamic information encoded by our shape and style representation, even under drastic garment editing operations. To conclude, with DeepCloth, we move a step forward in establishing a more flexible and general 3D garment digitization framework. Experiments demonstrate that our method achieves state-of-the-art garment representation performance compared with previous methods.
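As a rough illustration of the "topology-aware UV-position map" described above, the sketch below stores a garment as a UV-position map together with a binary UV-mask that marks which texels belong to the garment surface. The array shapes, resolution, and helper function are illustrative assumptions, not the paper's implementation.

```python
# Minimal illustrative sketch (assumed shapes and names, not the authors' code):
# a garment represented as a UV-position map plus a topology-aware UV-mask.
import numpy as np

H, W = 256, 256  # assumed UV resolution

# UV-position map: each valid texel stores the 3D position of the garment
# surface point that maps to that UV coordinate.
uv_position = np.zeros((H, W, 3), dtype=np.float32)

# Topology-aware UV-mask: 1 where the texel belongs to the garment surface,
# 0 elsewhere. Editing this mask changes the garment's shape/topology
# (e.g. sleeveless vs. long-sleeve) without changing the map's resolution.
uv_mask = np.zeros((H, W), dtype=np.float32)

def masked_points(uv_position: np.ndarray, uv_mask: np.ndarray) -> np.ndarray:
    """Recover the garment's 3D point set from the UV-based representation."""
    valid = uv_mask > 0.5
    return uv_position[valid]  # (N, 3) array of surface points

# Toy example: a "garment" covering the upper half of the UV domain.
uv_mask[: H // 2] = 1.0
uv_position[..., 0], uv_position[..., 1] = np.meshgrid(
    np.linspace(-1, 1, W), np.linspace(-1, 1, H)
)
points = masked_points(uv_position, uv_mask)
print(points.shape)  # (H//2 * W, 3)
```

In this sketch, reconstruction and editing would correspond to predicting or modifying the map and mask (in the paper, via a learned latent space), while animation would condition the predicted maps on body shape and pose.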