Capturing and rendering life-like hair is particularly challenging due to its fine geometric structure, the complex physical interaction and its non-trivial visual appearance. Yet, hair is a critical component for believable avatars. In this paper, we address the aforementioned problems: 1) we use a novel, volumetric hair representation that is composed of thousands of primitives. Each primitive can be rendered efficiently, yet realistically, by building on the latest advances in neural rendering. 2) To have a reliable control signal, we present a novel way of tracking hair on the strand level. To keep the computational effort manageable, we use guide hairs and classic techniques to expand those into a dense hood of hair. 3) To better enforce temporal consistency and generalization ability of our model, we further optimize the 3D scene flow of our representation with multi-view optical flow, using volumetric ray marching. Our method can not only create realistic renders of recorded multi-view sequences, but also create renderings for new hair configurations by providing new control signals. We compare our method with existing work on viewpoint synthesis and drivable animation and achieve state-of-the-art results. Please check out our project website at https://ziyanw1.github.io/hvh/.
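The abstract mentions rendering the volumetric primitives and optimizing scene flow via volumetric ray marching. As a point of reference, the following is a minimal sketch of standard emission-absorption compositing along a single ray, as commonly used in neural volume rendering; it is an illustrative approximation, not the paper's implementation, and the function name `march_ray` and its inputs are hypothetical.

```python
import numpy as np

def march_ray(densities, colors, deltas):
    """Composite samples along one ray with standard
    emission-absorption volume rendering (illustrative sketch).

    densities: (N,) per-sample volume density sigma_i
    colors:    (N, 3) per-sample RGB
    deltas:    (N,) distance between consecutive samples
    """
    # Per-sample opacity: alpha_i = 1 - exp(-sigma_i * delta_i)
    alphas = 1.0 - np.exp(-densities * deltas)
    # Transmittance up to each sample: product of (1 - alpha) before it
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    # Compositing weights and accumulated ray color
    weights = alphas * trans
    rgb = (weights[:, None] * colors).sum(axis=0)
    return rgb, weights
```

The same compositing weights that produce the rendered color can also be used to accumulate other per-sample quantities (such as a 3D scene-flow field) along the ray, which is what makes supervising flow with multi-view optical flow through the renderer possible.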