We present Neural Strands, a novel learning framework for modeling accurate hair geometry and appearance from multi-view image inputs. The learned hair model can be rendered in real time from any viewpoint with high-fidelity view-dependent effects. Unlike volumetric counterparts, our model achieves intuitive shape and style control. To enable these properties, we propose a hair representation based on a neural scalp texture that encodes the geometry and appearance of individual strands at each texel location. Furthermore, we introduce a neural rendering framework based on rasterization of the learned hair strands. Our neural rendering is strand-accurate and anti-aliased, making the rendering view-consistent and photorealistic. Combining appearance with a multi-view geometric prior, we enable, for the first time, the joint learning of appearance and explicit hair geometry from a multi-view setup. We demonstrate the efficacy of our approach in terms of fidelity and efficiency on a variety of hairstyles.
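To make the representation concrete, the following is a minimal sketch, assuming a PyTorch setting, of how a neural scalp texture might be realized: a learnable 2D grid of per-texel latent codes, with a shared MLP decoding each code into a strand polyline. This is not the authors' implementation; the class and parameter names (NeuralScalpTexture, tex_res, code_dim, points_per_strand) and all dimensions are illustrative assumptions.

```python
import torch
import torch.nn as nn


class NeuralScalpTexture(nn.Module):
    """Hypothetical sketch: a 2D latent grid over the scalp, where each
    texel's code is decoded into a 3D strand polyline by a shared MLP."""

    def __init__(self, tex_res=64, code_dim=64, points_per_strand=32):
        super().__init__()
        # One learnable latent code per scalp texel (geometry + appearance).
        self.texture = nn.Parameter(torch.zeros(tex_res, tex_res, code_dim))
        self.points_per_strand = points_per_strand
        # Shared decoder: latent code -> 3D points along one strand.
        self.decoder = nn.Sequential(
            nn.Linear(code_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, points_per_strand * 3),
        )

    def forward(self, uv):
        """uv: (N, 2) integer texel coordinates on the scalp texture.
        Returns (N, points_per_strand, 3) strand polylines."""
        codes = self.texture[uv[:, 0], uv[:, 1]]           # (N, code_dim)
        points = self.decoder(codes)                        # (N, P * 3)
        return points.view(-1, self.points_per_strand, 3)   # one polyline per texel


# Usage: decode strands for a batch of scalp texels.
model = NeuralScalpTexture()
uv = torch.randint(0, 64, (8, 2))
strands = model(uv)  # (8, 32, 3)
```

Appearance could be handled analogously, e.g. with a second per-texel code feeding a view-conditioned shading network; the paper's actual decoder architectures, texture resolutions, and rasterization-based rendering pipeline differ from this sketch.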