Motion capture (mocap) and time-of-flight based sensing of human actions are becoming increasingly popular modalities for robust activity analysis. Applications range from action recognition to quantifying movement quality for health applications. While marker-less motion capture has made great progress, marker-based systems, especially those with active markers, are still considered the gold standard in critical applications such as healthcare. However, both modalities face several practical challenges, such as limited visibility, tracking errors, and the need to keep the marker setup convenient, so that movements are often recorded with a reduced marker set. This implies that certain joint locations are not marked up at all, making downstream analysis of full-body movement challenging. To address this gap, we first pose the reconstruction of the unmarked joint data as an ill-posed linear inverse problem. We recover the missing joints for a given action by projecting it onto the manifold of human actions; this is achieved by optimizing the latent-space representation of a deep autoencoder. Experiments on both mocap and Kinect datasets clearly demonstrate that the proposed method performs very well in recovering the semantics of the actions and the dynamics of the missing joints. We will release all code and models publicly.
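To make the latent-space projection concrete, the sketch below illustrates one common way such an inverse problem can be solved with a trained autoencoder: optimize a latent code so that the decoded full-body action agrees with the observed (marked) joints, then read off the missing joints from the decoded output. This is only a minimal illustration under assumed interfaces; the function name `recover_missing_joints`, the `decoder` module, the latent dimension, and the loss weighting are hypothetical and not taken from the paper.

```python
import torch

def recover_missing_joints(decoder, x_obs, mask, latent_dim=64,
                           steps=500, lr=1e-2):
    """Fill in unmarked joints by latent-space optimization (illustrative sketch).

    decoder : trained autoencoder decoder mapping a latent code z to a
              full joint trajectory (assumed torch.nn.Module)
    x_obs   : observed trajectory with missing joints zero-filled, shape (T, J, 3)
    mask    : binary tensor of the same shape, 1 for observed entries, 0 for missing
    """
    z = torch.zeros(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        x_hat = decoder(z).reshape(x_obs.shape)
        # Penalize mismatch only on the observed (marked) joints; the decoder
        # constrains the result to lie on the learned manifold of human actions.
        loss = ((mask * (x_hat - x_obs)) ** 2).mean()
        loss.backward()
        opt.step()
    # The decoded trajectory provides estimates for the missing joints.
    return decoder(z).reshape(x_obs.shape).detach()
```

In such a setup, the masked reconstruction loss plays the role of the linear measurement constraint, while restricting the solution to the decoder's range acts as the manifold prior that makes the ill-posed problem tractable.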