We present a method for reconstructing accurate and consistent 3D hands from a monocular video. We observe that detected 2D hand keypoints and the image texture provide important cues about the geometry and texture of the 3D hand, which can reduce or even eliminate the requirement for 3D hand annotations. Accordingly, we propose ${\rm {S}^{2}HAND}$, a self-supervised 3D hand reconstruction model that jointly estimates pose, shape, texture, and camera viewpoint from a single RGB input, supervised only by easily accessible detected 2D keypoints. To leverage the continuous hand motion information contained in unlabeled video data, we further propose ${\rm {S}^{2}HAND(V)}$, which processes each frame with a weight-shared ${\rm {S}^{2}HAND}$ module and exploits additional motion, texture, and shape consistency constraints to promote more accurate hand poses and more consistent shapes and textures. Experiments on benchmark datasets demonstrate that our self-supervised approach achieves hand reconstruction performance comparable to recent fully-supervised methods in the single-frame setting, and notably improves reconstruction accuracy and consistency when trained on video data.
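The core self-supervision signal described above can be summarized as a 2D reprojection loss: the estimated 3D joints are projected by the predicted camera and compared against off-the-shelf 2D keypoint detections. The sketch below is a minimal illustration of this idea under assumed conventions, not the authors' implementation; the function name, the weak-perspective camera parameterization, and the confidence weighting are all assumptions.

```python
import torch

def reprojection_loss(joints_3d, cam, kp_2d, conf):
    """Illustrative self-supervised 2D keypoint loss (assumed form).

    joints_3d: (B, J, 3) estimated 3D hand joints
    cam:       (B, 3)    assumed weak-perspective camera [scale, tx, ty]
    kp_2d:     (B, J, 2) detected 2D keypoints from an off-the-shelf detector
    conf:      (B, J)    detector confidences, used to down-weight noisy joints
    """
    s = cam[:, :1, None]                      # (B, 1, 1) scale
    t = cam[:, None, 1:]                      # (B, 1, 2) image-plane translation
    proj = s * joints_3d[..., :2] + t         # orthographic projection to 2D
    err = ((proj - kp_2d) ** 2).sum(-1)       # per-joint squared error
    return (conf * err).sum() / conf.sum().clamp(min=1e-8)
```

Weighting by detector confidence is one plausible way to keep unreliable detections from dominating the loss; the paper's exact weighting scheme may differ.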
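For the video model, one plausible form of the shape and motion consistency constraints is to penalize per-frame MANO shape parameters for deviating from their sequence mean, and to smooth joint trajectories across adjacent frames. The sketch below is a hedged illustration under those assumptions; the exact loss forms in ${\rm {S}^{2}HAND(V)}$ may differ.

```python
import torch

def consistency_losses(betas, joints_3d):
    """Illustrative consistency terms for video training (assumed forms).

    betas:     (T, 10)   per-frame MANO shape parameters
    joints_3d: (T, J, 3) per-frame estimated 3D joints
    """
    # Shape consistency: the same hand should keep one shape across frames.
    shape_loss = ((betas - betas.mean(0, keepdim=True)) ** 2).mean()
    # Motion smoothness: adjacent frames should move coherently.
    motion_loss = ((joints_3d[1:] - joints_3d[:-1]) ** 2).mean()
    return shape_loss, motion_loss

# Example with random tensors (T=8 frames, J=21 hand joints):
betas = torch.randn(8, 10)
joints = torch.randn(8, 21, 3)
shape_l, motion_l = consistency_losses(betas, joints)
```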