We present Neural Generalized Implicit Functions (Neural-GIF), to animate people in clothing as a function of body pose. Given a sequence of scans of a subject in various poses, we learn to animate the character for new poses. Existing methods rely on template-based representations of the human body (or clothing). However, such models usually have a fixed and limited resolution, require difficult data pre-processing steps, and cannot be used with complex clothing. We draw inspiration from template-based methods, which factorize motion into articulation and non-rigid deformation, but generalize this concept to implicit shape learning to obtain a more flexible model. We learn to map every point in space to a canonical space, where a learned deformation field is applied to model non-rigid effects, before evaluating the signed distance field. Our formulation allows the learning of complex and non-rigid deformations of clothing and soft tissue, without computing a template registration, as is common with current approaches. Neural-GIF can be trained on raw 3D scans and reconstructs detailed, complex surface geometry and deformations. Moreover, the model generalizes to new poses. We evaluate our method on a variety of characters from different public datasets in diverse clothing styles and show significant improvements over baseline methods, both quantitatively and qualitatively. We also extend our model to a multiple-shape setting. To stimulate further research, we will make the model, code and data publicly available at: https://virtualhumans.mpi-inf.mpg.de/neuralgif/
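The query-time composition described above (map a posed-space point to canonical space, apply a learned deformation field, then evaluate a canonical signed distance field) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the tiny random MLPs stand in for the learned networks, and the 4-dimensional pose code is a hypothetical placeholder for the actual pose parameterization.

```python
import numpy as np

rng = np.random.default_rng(0)

def tiny_mlp(in_dim, out_dim):
    """Random 2-layer ReLU MLP standing in for a learned network (illustrative only)."""
    w1 = rng.normal(size=(in_dim, 16)) * 0.1
    w2 = rng.normal(size=(16, out_dim)) * 0.1
    return lambda z: np.maximum(z @ w1, 0.0) @ w2

# Hypothetical stand-ins for the three learned fields in the pipeline.
canonical_map = tiny_mlp(3 + 4, 3)  # (posed point, pose code) -> canonical point
deformation   = tiny_mlp(3 + 4, 3)  # (canonical point, pose code) -> non-rigid offset
canonical_sdf = tiny_mlp(3, 1)      # canonical point -> signed distance

def neural_gif_sdf(x, pose):
    """Query-time composition: posed point -> canonical -> deformed -> SDF value."""
    x_c = canonical_map(np.concatenate([x, pose]))   # undo articulation
    dx  = deformation(np.concatenate([x_c, pose]))   # pose-dependent non-rigid effects
    return float(canonical_sdf(x_c + dx)[0])         # evaluate canonical SDF

x = np.array([0.1, -0.2, 0.3])         # query point in posed space
pose = np.array([0.0, 0.5, 0.0, 1.0])  # toy pose code
d = neural_gif_sdf(x, pose)            # signed distance at x under this pose
```

Because every stage operates on arbitrary query points rather than template vertices, the same composition applies at any spatial resolution, which is what frees the method from a fixed-resolution template registration.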