Reconstructing a high-precision, high-fidelity 3D human hand from a color image plays a central role in replicating a realistic virtual hand for human-computer interaction and virtual reality applications. Current methods still lack accuracy and fidelity under diverse hand poses and severe occlusions. In this study, we propose an I2UV-HandNet model for accurate hand pose and shape estimation as well as 3D hand super-resolution reconstruction. Specifically, we present the first UV-based 3D hand shape representation. To recover a 3D hand mesh from an RGB image, we design an AffineNet that predicts a UV position map from the input in an image-to-image translation fashion. To obtain a higher-fidelity shape, we exploit an additional SRNet that transforms the low-resolution UV map output by AffineNet into a high-resolution one. For the first time, we demonstrate the characterization capability of the UV-based hand shape representation. Our experiments show that the proposed method achieves state-of-the-art performance on several challenging benchmarks.
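The core idea of the UV position map can be sketched as follows: each pixel of an H×W×3 image stores the 3D coordinates of the surface point that its UV location maps to, so a mesh becomes an image that image-to-image networks can predict. This is only a minimal illustrative sketch; the pixel layout, the one-vertex-per-pixel assignment, and the vertex count (778, as in a MANO-style hand template) are our assumptions, not the authors' actual UV unwrapping.

```python
import numpy as np

H = W = 64          # assumed UV map resolution (illustrative)
NUM_VERTS = 778     # MANO hand template vertex count (assumption)

rng = np.random.default_rng(0)
verts = rng.normal(size=(NUM_VERTS, 3)).astype(np.float32)  # dummy 3D vertices

# Assign each vertex a unique pixel in UV space (a real UV unwrapping
# would come from the mesh template's texture coordinates).
idx = np.arange(NUM_VERTS)
px = np.stack([idx % W, idx // W], axis=1)  # (u, v) pixel per vertex

# Rasterize: write each vertex's (x, y, z) into its UV pixel.
uv_map = np.zeros((H, W, 3), dtype=np.float32)
uv_map[px[:, 1], px[:, 0]] = verts

# Recover the mesh by sampling the map back at each vertex's UV pixel.
recovered = uv_map[px[:, 1], px[:, 0]]
print(np.allclose(recovered, verts))  # exact round trip under this layout
```

Because the representation is an ordinary image, super-resolving the mesh (as SRNet does) reduces to upsampling this 3-channel map.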