Tracking and reconstructing the 3D pose and geometry of two hands in interaction is a challenging problem that is highly relevant to several human-computer interaction applications, including AR/VR, robotics, and sign language recognition. Existing works are either limited to simpler tracking settings (e.g., considering only a single hand or two spatially separated hands), or rely on less ubiquitous sensors, such as depth cameras. In contrast, in this work we present the first real-time method for motion capture of skeletal pose and 3D surface geometry of hands from a single RGB camera that explicitly considers close interactions. To address the inherent depth ambiguities in RGB data, we propose a novel multi-task CNN that regresses multiple complementary pieces of information, including segmentation, dense matchings to a 3D hand model, and 2D keypoint positions, together with newly proposed intra-hand relative depth and inter-hand distance maps. These predictions are subsequently used in a generative model-fitting framework to estimate the pose and shape parameters of a 3D hand model for both hands. We experimentally verify the individual components of our RGB two-hand tracking and 3D reconstruction pipeline through an extensive ablation study. Moreover, we demonstrate that our approach achieves previously unseen two-hand tracking performance from RGB, and quantitatively and qualitatively outperforms existing RGB-based methods that were not explicitly designed for two-hand interactions. Finally, our method even performs on par with depth-based real-time methods.
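As a rough illustration of the multi-task prediction step described in the abstract, the following sketch lays out one plausible configuration of per-task output heads over a shared feature map in PyTorch. This is not the authors' architecture: the backbone, channel counts, output resolutions, and the 21-joints-per-hand convention are all assumptions made for illustration only.

```python
# A minimal sketch (not the paper's code) of a multi-task CNN head layout
# matching the predictions named in the abstract: per-pixel segmentation,
# dense correspondences to a 3D hand model, 2D keypoint heatmaps, an
# intra-hand relative depth map per hand, and an inter-hand distance map.
# Channel counts and the 21-joint hand convention are assumptions.
import torch
import torch.nn as nn


class TwoHandMultiTaskHeads(nn.Module):
    def __init__(self, feat_ch: int = 256, joints_per_hand: int = 21):
        super().__init__()

        def head(out_ch: int) -> nn.Module:
            # One lightweight convolutional head per task, applied to a
            # shared feature map produced by a common encoder.
            return nn.Sequential(
                nn.Conv2d(feat_ch, feat_ch, 3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(feat_ch, out_ch, 1),
            )

        self.segmentation = head(3)                    # background / left / right
        self.dense_matching = head(3)                  # per-pixel hand-model coordinates
        self.keypoints_2d = head(2 * joints_per_hand)  # one heatmap per joint, both hands
        self.intra_depth = head(2)                     # relative depth map for each hand
        self.inter_distance = head(1)                  # distance map between the two hands

    def forward(self, feats: torch.Tensor) -> dict:
        # All heads share the same input features; their outputs would feed
        # the subsequent generative model-fitting stage.
        return {
            "segmentation": self.segmentation(feats),
            "dense_matching": self.dense_matching(feats),
            "keypoints_2d": self.keypoints_2d(feats),
            "intra_depth": self.intra_depth(feats),
            "inter_distance": self.inter_distance(feats),
        }


# Usage: the feature map would come from a shared encoder (assumed here).
heads = TwoHandMultiTaskHeads()
out = heads(torch.randn(1, 256, 64, 64))
print({k: tuple(v.shape) for k, v in out.items()})
```

The design choice sketched here, several cheap task-specific heads over one shared representation, is the standard way to realize a multi-task CNN; how the actual method couples these outputs with the model-fitting energy is described in the body of the paper, not in this sketch.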