We introduce HPS (Human POSEitioning System), a method to recover the full 3D pose of a human registered with a 3D scan of the surrounding environment using wearable sensors. Using IMUs attached to the body limbs and a head-mounted camera looking outwards, HPS fuses camera-based self-localization with IMU-based human body tracking. The former provides drift-free but noisy position and orientation estimates, while the latter is accurate in the short term but subject to drift over longer periods of time. We show that our optimization-based integration exploits the benefits of the two, resulting in pose accuracy free of drift. Furthermore, we integrate 3D scene constraints into our optimization, such as foot contact with the ground, resulting in physically plausible motion. HPS complements more common third-person-based 3D pose estimation methods. It allows capturing larger recording volumes and longer periods of motion, and could be used for VR/AR applications where humans interact with the scene without requiring direct line of sight with an external camera, or to train agents that navigate and interact with the environment based on first-person visual input, like real humans. With HPS, we recorded a dataset of humans interacting with large 3D scenes (300-1000 sq. m), consisting of 7 subjects and more than 3 hours of diverse motion. The dataset, code and video will be available on the project page: http://virtualhumans.mpi-inf.mpg.de/hps/ .
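To make the fusion principle concrete, the following is a minimal, self-contained sketch, not the authors' implementation: a toy 1D trajectory is recovered by least-squares from simulated "camera" fixes (absolute, drift-free, noisy) and simulated "IMU" increments (relative, smooth, biased so they drift when integrated). All names (`w_cam`, `w_imu`, the bias and noise levels) are illustrative assumptions; the actual HPS optimization operates on full 6-DoF camera localizations, SMPL body poses, and scene-contact terms.

```python
import numpy as np

# Toy 1D illustration of the HPS fusion idea (not the paper's code):
# camera self-localization gives drift-free but noisy absolute positions,
# IMU integration gives accurate relative motion that drifts over time.
# We recover the trajectory x_0..x_{T-1} by weighted linear least-squares.

rng = np.random.default_rng(0)
T = 500
true = np.cumsum(rng.normal(0.0, 0.05, T))      # ground-truth path

cam = true + rng.normal(0.0, 0.30, T)           # noisy absolute fixes
imu_rel = np.diff(true) + 0.002                 # accurate steps + small bias -> drift

w_cam, w_imu = 1.0, 100.0                       # trust IMU locally, camera globally

# Residuals: sqrt(w_cam)*(x_t - cam_t) and sqrt(w_imu)*((x_{t+1}-x_t) - imu_rel_t)
A = np.zeros((T + (T - 1), T))
b = np.zeros(T + (T - 1))
A[:T] = np.sqrt(w_cam) * np.eye(T)
b[:T] = np.sqrt(w_cam) * cam
for t in range(T - 1):
    A[T + t, t] = -np.sqrt(w_imu)
    A[T + t, t + 1] = np.sqrt(w_imu)
    b[T + t] = np.sqrt(w_imu) * imu_rel[t]

x, *_ = np.linalg.lstsq(A, b, rcond=None)

imu_only = np.concatenate([[true[0]], true[0] + np.cumsum(imu_rel)])
print("IMU-only end drift:", abs(imu_only[-1] - true[-1]))
print("Camera-only RMSE  :", np.sqrt(np.mean((cam - true) ** 2)))
print("Fused RMSE        :", np.sqrt(np.mean((x - true) ** 2)))
```

Running the sketch shows the fused estimate beating both inputs: the IMU bias no longer accumulates because the camera residuals anchor the absolute position, while the heavily weighted IMU residuals smooth out the camera noise. In the paper's setting, scene constraints such as foot contact with the ground scan would enter the same objective as additional residual terms.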