3D human pose estimation is a key component of clinical monitoring systems. The clinical applicability of deep pose estimation models, however, is limited by their poor generalization under domain shifts along with their need for sufficient labeled training data. As a remedy, we present a novel domain adaptation method, adapting a model from a labeled source to a shifted unlabeled target domain. Our method comprises two complementary adaptation strategies based on prior knowledge about human anatomy. First, we guide the learning process in the target domain by constraining predictions to the space of anatomically plausible poses. To this end, we embed the prior knowledge into an anatomical loss function that penalizes asymmetric limb lengths, implausible bone lengths, and implausible joint angles. Second, we propose to filter pseudo labels for self-training according to their anatomical plausibility and incorporate the concept into the Mean Teacher paradigm. We unify both strategies in a point cloud-based framework applicable to unsupervised and source-free domain adaptation. Evaluation is performed for in-bed pose estimation under two adaptation scenarios, using the public SLP dataset and a newly created dataset. Our method consistently outperforms various state-of-the-art domain adaptation methods, surpasses the baseline model by 31%/66%, and reduces the domain gap by 65%/82%. Source code is available at https://github.com/multimodallearning/da-3dhpe-anatomy.
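To make the anatomical constraints concrete, below is a minimal, illustrative PyTorch sketch of a plausibility loss combining the three penalties named above (asymmetric limb lengths, implausible bone lengths, implausible joint angles). It is not the authors' implementation: the skeleton definition, symmetric bone pairs, length bounds, and the angle threshold are assumed placeholders.

```python
# Illustrative sketch only: an anatomical plausibility loss on predicted 3D joints.
# All skeleton constants below are hypothetical examples, not values from the paper.
import torch
import torch.nn.functional as F

BONES = [(0, 1), (1, 2), (0, 3), (3, 4)]           # (parent, child) joint indices per bone
SYMMETRIC_PAIRS = [(0, 2), (1, 3)]                 # left/right counterparts (indices into BONES)
LENGTH_BOUNDS = torch.tensor([[0.25, 0.40],        # per-bone [min, max] length in meters (assumed)
                              [0.20, 0.35],
                              [0.25, 0.40],
                              [0.20, 0.35]])

def anatomical_loss(joints: torch.Tensor) -> torch.Tensor:
    """joints: (B, J, 3) predicted 3D joint positions."""
    parents = torch.tensor([p for p, _ in BONES])
    children = torch.tensor([c for _, c in BONES])
    bone_vec = joints[:, children] - joints[:, parents]   # (B, n_bones, 3)
    bone_len = bone_vec.norm(dim=-1)                       # (B, n_bones)

    # 1) Symmetry: penalize differing lengths of left/right counterpart bones.
    sym = sum((bone_len[:, l] - bone_len[:, r]).abs().mean()
              for l, r in SYMMETRIC_PAIRS)

    # 2) Bone-length plausibility: penalize lengths outside the assumed [min, max] range.
    lo, hi = LENGTH_BOUNDS[:, 0], LENGTH_BOUNDS[:, 1]
    length = (torch.relu(lo - bone_len) + torch.relu(bone_len - hi)).mean()

    # 3) Joint-angle plausibility: crude stand-in penalizing two consecutive bones
    #    (e.g. upper arm and forearm) that fold beyond a hypothetical 150° maximum.
    cos_max = torch.cos(torch.deg2rad(torch.tensor(150.0)))
    cos_ang = F.cosine_similarity(bone_vec[:, 0], bone_vec[:, 1], dim=-1)
    angle = torch.relu(cos_max - cos_ang).mean()

    return sym + length + angle
```

Such a loss is differentiable in the predicted joint coordinates, so it could serve both as an unsupervised training signal on the target domain and as a scalar plausibility score for ranking and filtering pseudo labels.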