3D pose estimation has recently attracted substantial interest in the computer vision community. Existing 3D pose estimation methods rely heavily on large, well-annotated 3D pose datasets, and they generalize poorly to unseen poses because the diversity of 3D poses in training sets is limited. In this work, we propose PoseGU, a novel human pose generator that produces diverse poses from only a small set of seed samples, while employing Counterfactual Risk Minimization to pursue an unbiased evaluation objective. Extensive experiments demonstrate that PoseGU outperforms almost all state-of-the-art 3D human pose estimation methods under consideration on three popular benchmark datasets. Empirical analysis further shows that PoseGU generates 3D poses with improved data diversity and better generalization ability.