In this paper, we propose a novel pipeline for 3D reconstruction of the full human body from egocentric viewpoints. 3D reconstruction of the human body from egocentric viewpoints is challenging because the view is skewed and the body parts farther from the camera are occluded; one such example is the view from cameras installed below VR headsets. To achieve this task, we first make use of conditional GANs to translate the egocentric views into full-body third-person views. This improves the comprehensibility of the image and compensates for occlusions. The generated third-person view is then passed through a 3D reconstruction module that generates a 3D mesh of the body. We also train a network that takes the third-person full-body view of the subject and generates texture maps to apply to the mesh. The generated mesh has fairly realistic body proportions and is fully rigged, allowing further applications such as real-time animation and pose transfer in games. This approach can be key to a new domain of mobile human telepresence.
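To make the data flow of the proposed pipeline concrete, the sketch below shows the three stages (egocentric-to-third-person view translation, mesh reconstruction, and texture prediction) as function composition. All stage functions here are hypothetical stand-ins with placeholder outputs, not the paper's actual trained models; the output shapes (a 256×256 translated view, an SMPL-sized vertex count, a 512×512 UV texture) are illustrative assumptions only.

```python
import numpy as np

# Illustrative sketch of the three-stage pipeline described in the abstract.
# Every function below is a hypothetical stand-in for a learned model.

def egocentric_to_third_person(ego_img: np.ndarray) -> np.ndarray:
    """Stand-in for the conditional-GAN view translator.

    A real model would synthesize an unoccluded full-body third-person
    view; here we just return an image of an assumed target resolution.
    """
    return np.zeros((256, 256, 3), dtype=np.float32)

def reconstruct_mesh(third_person_img: np.ndarray):
    """Stand-in for the 3D reconstruction module: returns (vertices, faces).

    The vertex/face counts match the SMPL body template purely as an
    illustrative example; the paper does not specify the mesh topology here.
    """
    vertices = np.zeros((6890, 3), dtype=np.float32)
    faces = np.zeros((13776, 3), dtype=np.int64)
    return vertices, faces

def predict_texture(third_person_img: np.ndarray) -> np.ndarray:
    """Stand-in for the texture network: returns a UV texture atlas."""
    return np.zeros((512, 512, 3), dtype=np.float32)

def pipeline(ego_img: np.ndarray):
    """Compose the stages: egocentric image -> textured 3D mesh."""
    third_person = egocentric_to_third_person(ego_img)
    vertices, faces = reconstruct_mesh(third_person)
    texture = predict_texture(third_person)
    return vertices, faces, texture
```

The key design point the sketch highlights is that reconstruction and texture prediction both consume the *translated* third-person view rather than the raw egocentric image, so the view-translation stage absorbs the skew and occlusion problems before any 3D inference happens.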