3D human pose estimation from a single image remains a challenging problem despite the large amount of work devoted to this field. Most methods directly use neural networks and ignore certain constraints (e.g., reprojection, joint angle, and bone length constraints). The few methods that do consider these constraints train the networks separately and therefore cannot effectively resolve the depth ambiguity problem. In this paper, we propose a GAN-based model for 3D human pose estimation in which a reprojection network is employed to learn the mapping from the distribution of 3D poses to that of 2D poses, and a discriminator is employed for 2D-3D consistency discrimination. We adopt a novel strategy to train the generator, the reprojection network, and the discriminator synchronously. Furthermore, inspired by the kinematic chain space (KCS) matrix, we introduce a weighted KCS matrix and take it as one of the discriminator's inputs to impose joint angle and bone length constraints. Experimental results on Human3.6M show that our method significantly outperforms state-of-the-art methods in most cases.
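For context, below is a minimal sketch of how a KCS matrix can be formed from 3D joint positions, following the standard construction in which the diagonal carries squared bone lengths and the off-diagonal entries encode angles between bones. The skeleton topology and the optional per-bone weighting in the sketch are illustrative assumptions; the specific weighting scheme of the proposed weighted KCS matrix is not detailed in this abstract.

```python
import numpy as np

# Bone list as (parent, child) joint-index pairs; an assumed 17-joint
# Human3.6M-style topology, used here only for illustration.
BONES = [(0, 1), (1, 2), (2, 3), (0, 4), (4, 5), (5, 6),
         (0, 7), (7, 8), (8, 9), (9, 10), (8, 11), (11, 12),
         (12, 13), (8, 14), (14, 15), (15, 16)]

def kcs_matrix(pose_3d, weights=None):
    """Compute a (optionally weighted) kinematic chain space matrix.

    pose_3d : (J, 3) array of 3D joint positions.
    weights : optional per-bone weights; a placeholder, since the paper's
              weighting scheme is not specified in the abstract.
    """
    # Stack bone vectors: each row is child joint minus parent joint.
    b = np.stack([pose_3d[c] - pose_3d[p] for p, c in BONES], axis=0)  # (B, 3)
    if weights is not None:
        b = np.asarray(weights)[:, None] * b
    # KCS = B B^T: diagonal entries are squared bone lengths,
    # off-diagonal entries are dot products encoding inter-bone angles.
    return b @ b.T
```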