We present HandAvatar, a novel representation for hand animation and rendering that generates smoothly compositional geometry and self-occlusion-aware texture. Specifically, we first develop MANO-HD, a high-resolution mesh topology, to fit personalized hand shapes. We then decompose the hand geometry into per-bone rigid parts and re-compose paired geometry encodings to derive an across-part consistent occupancy field. For texture modeling, we propose a self-occlusion-aware shading field (SelF). In SelF, drivable anchors are paved on the MANO-HD surface to record albedo information under a wide variety of hand poses. Moreover, directed soft occupancy is designed to describe the ray-to-surface relation, which is leveraged to generate an illumination field that disentangles pose-independent albedo from pose-dependent illumination. Trained on monocular video data, HandAvatar performs free-pose hand animation and rendering while achieving superior appearance fidelity. We also demonstrate that HandAvatar provides a route for hand appearance editing. Project website: https://seanchenxy.github.io/HandAvatarWeb.