We present HandAvatar, a novel representation for hand animation and rendering that generates smoothly compositional geometry and self-occlusion-aware texture. Specifically, we first develop MANO-HD, a high-resolution mesh topology, to fit personalized hand shapes. Next, we decompose the hand geometry into per-bone rigid parts and then re-compose paired geometry encodings to derive an across-part consistent occupancy field. For texture modeling, we propose a self-occlusion-aware shading field (SelF). In SelF, drivable anchors are paved on the MANO-HD surface to record albedo information under a wide variety of hand poses. Moreover, a directed soft occupancy is designed to describe the ray-to-surface relation, which is leveraged to generate an illumination field that disentangles pose-independent albedo from pose-dependent illumination. Trained on monocular video data, HandAvatar performs free-pose hand animation and rendering while achieving superior appearance fidelity. We also demonstrate that HandAvatar provides a route for hand appearance editing. Project website: https://seanchenxy.github.io/HandAvatarWeb.
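The albedo–illumination disentanglement described above can be illustrated with a minimal sketch. This is not the paper's implementation: the sigmoid form of the soft occupancy and the `sharpness` parameter are assumptions introduced here purely to show the idea that a signed ray-to-surface distance is mapped to a soft visibility weight, which then modulates a pose-independent albedo.

```python
import numpy as np

def directed_soft_occupancy(signed_dist, sharpness=50.0):
    """Map a signed ray-to-surface distance to a soft occupancy in (0, 1).
    Points behind the surface (negative distance) approach 1, points in
    front approach 0. A sigmoid stand-in for the paper's definition
    (hypothetical; the true formulation is pose- and direction-aware)."""
    return 1.0 / (1.0 + np.exp(sharpness * np.asarray(signed_dist)))

def shade(albedo, illumination):
    """Compose the final color as pose-independent albedo modulated by
    pose-dependent illumination, clipped to a valid color range."""
    return np.clip(np.asarray(albedo) * np.asarray(illumination), 0.0, 1.0)
```

Because the albedo term carries no pose dependence in this decomposition, editing it (e.g. repainting a region of the hand) changes appearance consistently across poses, which is the route to appearance editing mentioned in the abstract.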