Neural radiance fields using pixel-aligned features can render photo-realistic novel views. However, when pixel-aligned features are directly introduced into human avatar reconstruction, rendering is limited to static humans rather than animatable avatars. In this paper, we propose AniPixel, a novel animatable and generalizable human avatar reconstruction method that leverages pixel-aligned features for body geometry prediction and RGB color blending. Technically, to align the canonical space with the target space and the observation space, we propose a bidirectional neural skinning field based on skeleton-driven deformation to establish the target-to-canonical and canonical-to-observation correspondences. Then, we disentangle the canonical body geometry into a normalized neutral-sized body and a subject-specific residual for better generalizability. As geometry and appearance are closely related, we introduce pixel-aligned features to facilitate body geometry prediction and detailed surface normals to reinforce RGB color blending. Moreover, we devise a pose-dependent and view-direction-related shading module to represent local illumination variance. Experiments show that our AniPixel renders novel views comparable to state-of-the-art methods while delivering better novel-pose animation results. The code will be released.
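The skeleton-driven deformation underlying the bidirectional skinning field can be illustrated with standard linear blend skinning (LBS): each point is deformed by a weight-blended combination of per-bone rigid transforms, and the target-to-canonical direction inverts the blended transform. The following is a minimal NumPy sketch, not the paper's implementation; the function names and the assumption that skinning weights are known at the query points (in the method itself, a neural skinning field predicts them) are ours.

```python
import numpy as np

def lbs_forward(x_canonical, weights, transforms):
    """Deform canonical points to the target pose via linear blend skinning.

    x_canonical: (N, 3) points in canonical space
    weights:     (N, K) skinning weights, rows summing to 1
    transforms:  (K, 4, 4) per-bone rigid transforms (hypothetical inputs)
    """
    x_h = np.concatenate([x_canonical, np.ones((len(x_canonical), 1))], axis=1)
    # Blend the per-bone 4x4 transforms by the skinning weights -> (N, 4, 4)
    blended = np.einsum('nk,kij->nij', weights, transforms)
    return np.einsum('nij,nj->ni', blended, x_h)[:, :3]

def lbs_inverse(x_target, weights, transforms):
    """Target-to-canonical mapping by inverting each point's blended transform.

    Assumes the skinning weights at the target points are given; a neural
    skinning field would supply them in the bidirectional setting.
    """
    blended = np.einsum('nk,kij->nij', weights, transforms)
    x_h = np.concatenate([x_target, np.ones((len(x_target), 1))], axis=1)
    return np.einsum('nij,nj->ni', np.linalg.inv(blended), x_h)[:, :3]
```

With consistent weights, the inverse mapping recovers the canonical points exactly, which is the property that lets the canonical space mediate between observation and target poses.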