3D posture estimation plays an important role in analyzing and improving ergonomics in physical human-robot interaction and in reducing the risk of musculoskeletal disorders. Vision-based posture estimation approaches are prone to sensor and model errors, as well as occlusion, while posture estimation from the interacting robot's trajectory alone suffers from ambiguous solutions. To benefit from the advantages of both approaches and mitigate their drawbacks, we introduce a low-cost, non-intrusive, and occlusion-robust multi-sensory 3D posture estimation algorithm for physical human-robot interaction. We use 2D postures from OpenPose on a single camera, together with the trajectory of the interacting robot while the human performs a task. We model the problem as a partially observable dynamical system and infer the 3D posture via a particle filter. We present our work on teleoperation, but it can be generalized to other applications of physical human-robot interaction. We show that our multi-sensory system resolves human kinematic redundancy better than posture estimation from OpenPose alone or from the robot's trajectory alone, which increases the accuracy of the estimated postures relative to gold-standard motion-capture postures. Moreover, our approach also outperforms the single-sensory methods when postures are assessed with the RULA ergonomic assessment tool.
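The fusion idea behind the particle filter can be illustrated with a toy sketch: a planar 2-link "arm" whose joint angles are the hidden state, a robot end-effector (held by the hand) that observes the full hand position but leaves two inverse-kinematics solutions, and a "camera" that observes only the elbow's x-coordinate, which disambiguates them. All link lengths, noise levels, and parameters below are illustrative assumptions, not the values or model used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
L1, L2 = 0.30, 0.25  # assumed upper-arm / forearm lengths (m)

def fk(theta):
    """Forward kinematics: joint angles (..., 2) -> hand position (..., 2)."""
    t1 = theta[..., 0]
    t12 = theta[..., 0] + theta[..., 1]
    return np.stack([L1 * np.cos(t1) + L2 * np.cos(t12),
                     L1 * np.sin(t1) + L2 * np.sin(t12)], axis=-1)

true_theta = np.array([0.8, -0.6])                         # hidden posture
z_robot = fk(true_theta) + rng.normal(0, 0.002, 2)         # robot: hand (x, y)
z_cam = L1 * np.cos(true_theta[0]) + rng.normal(0, 0.005)  # camera: elbow x

def log_weight(particles):
    """Fused log-likelihood of the robot and camera observations."""
    hand = fk(particles)
    elbow_x = L1 * np.cos(particles[:, 0])
    lw = -np.sum((hand - z_robot) ** 2, axis=1) / (2 * 0.01 ** 2)
    lw -= (elbow_x - z_cam) ** 2 / (2 * 0.02 ** 2)
    return lw

# Particle filter: random-walk prediction, importance weighting, resampling.
N = 2000
particles = rng.uniform(-np.pi, np.pi, (N, 2))
for _ in range(30):
    particles += rng.normal(0, 0.05, (N, 2))        # prediction step
    lw = log_weight(particles)
    w = np.exp(lw - lw.max())
    w /= w.sum()
    particles = particles[rng.choice(N, N, p=w)]    # resampling step

est = particles[np.argmax(log_weight(particles))]   # MAP-like point estimate
hand_err = float(np.linalg.norm(fk(est) - fk(true_theta)))
print(f"estimated angles: {est}, hand-position error: {hand_err:.3f} m")
```

The robot observation alone admits two elbow configurations (the kinematic redundancy the abstract refers to); the single added camera coordinate makes the posterior unimodal, which is the mechanism the multi-sensory approach exploits.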