The capture and animation of human hair are two of the major challenges in the creation of realistic avatars for virtual reality. Both are difficult problems because hair has complex geometry and appearance and exhibits intricate motion. In this paper, we present a two-stage approach that models hair independently of the head to address these challenges in a data-driven manner. The first stage, state compression, learns a low-dimensional latent space of 3D hair states, capturing both motion and appearance, via a novel autoencoder-as-a-tracker strategy. To better disentangle the hair and head in appearance learning, we employ multi-view hair segmentation masks in combination with a differentiable volumetric renderer. The second stage learns a novel hair dynamics model that performs temporal hair transfer based on the discovered latent codes. To enforce higher stability while driving our dynamics model, we employ the 3D point-cloud autoencoder from the compression stage to de-noise the hair state. Our model outperforms the state of the art in novel view synthesis and can create novel hair animations without relying on hair observations as a driving signal. Project page: https://ziyanw1.github.io/neuwigs/.
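Below is a minimal, hypothetical PyTorch sketch of the two-stage pipeline the abstract describes: a state-compression autoencoder mapping a 3D hair state to a latent code, a latent dynamics model driven without hair observations, and the re-encoding step that de-noises the rollout. All module names, dimensions, and MLP architectures are illustrative placeholders, not the paper's actual networks (which use volumetric primitives and differentiable volumetric rendering).

```python
# Illustrative sketch only; network designs are assumptions, not the paper's.
import torch
import torch.nn as nn


class HairStateAutoencoder(nn.Module):
    """Stage 1 (state compression): maps a 3D hair point cloud to a
    low-dimensional latent code and back. In the paper this latent also
    carries appearance; here only the geometric part is sketched."""

    def __init__(self, num_points=1024, latent_dim=256):
        super().__init__()
        self.num_points = num_points
        self.encoder = nn.Sequential(
            nn.Linear(num_points * 3, 1024), nn.ReLU(),
            nn.Linear(1024, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 1024), nn.ReLU(),
            nn.Linear(1024, num_points * 3),
        )

    def encode(self, points):              # points: (B, N, 3)
        return self.encoder(points.flatten(1))

    def decode(self, z):                   # z: (B, latent_dim)
        return self.decoder(z).view(-1, self.num_points, 3)


class HairDynamicsModel(nn.Module):
    """Stage 2: predicts the next hair latent code from the previous code
    plus a driving signal (e.g. head motion), so animation does not
    require hair observations at test time."""

    def __init__(self, latent_dim=256, drive_dim=6):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + drive_dim, 512), nn.ReLU(),
            nn.Linear(512, latent_dim),
        )

    def forward(self, z_prev, drive):
        return self.net(torch.cat([z_prev, drive], dim=-1))


# Autoregressive rollout: at each step the decoded hair state is passed
# back through the autoencoder, which acts as the de-noiser that keeps
# the rollout stable, mirroring the role described in the abstract.
ae = HairStateAutoencoder()
dyn = HairDynamicsModel()
z = torch.zeros(1, 256)                    # initial hair latent
for t in range(30):
    drive = torch.zeros(1, 6)              # per-frame head motion (placeholder)
    z = dyn(z, drive)                      # temporal transfer in latent space
    points = ae.decode(z)                  # 3D hair state for rendering
    z = ae.encode(points)                  # re-encode to de-noise the state
```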