The goal of 3D pose transfer is to transfer the pose of a source mesh to a target mesh while preserving the identity information (e.g., face, body shape) of the target mesh. Deep learning-based methods have improved the efficiency and performance of 3D pose transfer. However, most of them are trained with ground-truth supervision, which is of limited availability in real-world scenarios. In this work, we present X-DualNet, a simple yet effective approach that enables unsupervised 3D pose transfer. In X-DualNet, we introduce a generator $G$ that consists of correspondence learning and pose transfer modules to achieve 3D pose transfer. We learn the shape correspondence by solving an optimal transport problem without any keypoint annotations, and generate high-quality meshes with our elastic instance normalization (ElaIN) in the pose transfer module. With $G$ as the basic component, we propose a cross consistency learning scheme and a dual reconstruction objective to learn pose transfer without supervision. In addition, we adopt an as-rigid-as-possible deformer during training to fine-tune the body shape of the generated results. Extensive experiments on human and animal data demonstrate that our framework achieves performance comparable to state-of-the-art supervised approaches.
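The abstract only states that shape correspondence is obtained by solving an optimal transport problem. As a hedged illustration of what such a solver can look like (not the paper's exact formulation), the sketch below computes a soft vertex correspondence between two meshes with entropy-regularized Sinkhorn iterations; the vertex features, regularization strength, and iteration count are assumptions introduced for the example.

```python
# Illustrative sketch: soft vertex correspondence via entropy-regularized
# optimal transport (Sinkhorn). Inputs and hyperparameters are assumptions.
import torch

def sinkhorn_correspondence(feat_src, feat_tgt, eps=0.05, n_iters=50):
    """feat_src: (N, D) vertex features of the pose (source) mesh.
    feat_tgt: (M, D) vertex features of the identity (target) mesh.
    Returns an (N, M) transport plan usable as a soft correspondence matrix."""
    # Cost matrix: pairwise squared Euclidean distances between vertex features.
    cost = torch.cdist(feat_src, feat_tgt, p=2) ** 2        # (N, M)
    K = torch.exp(-cost / eps)                               # Gibbs kernel
    # Uniform marginals over the vertices of each mesh.
    a = torch.full((feat_src.shape[0],), 1.0 / feat_src.shape[0])
    b = torch.full((feat_tgt.shape[0],), 1.0 / feat_tgt.shape[0])
    u = torch.ones_like(a)
    for _ in range(n_iters):                                 # Sinkhorn updates
        v = b / (K.t() @ u)
        u = a / (K @ v)
    T = u.unsqueeze(1) * K * v.unsqueeze(0)                  # transport plan
    return T / T.sum(dim=1, keepdim=True)                    # row-normalized correspondence
```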
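Since the abstract describes the unsupervised scheme only at a high level, the following minimal sketch shows one way a cross consistency and dual reconstruction objective could be assembled, assuming a generator `G(pose_mesh, identity_mesh)` that outputs deformed vertices; the generator interface, loss terms, and weights are placeholders, not the paper's exact losses.

```python
# Minimal training-step sketch of an unsupervised objective combining
# self-reconstruction with a round-trip (cross consistency) term.
# `G(pose_mesh, identity_mesh)` and the loss weights are assumptions.
import torch
import torch.nn.functional as F

def unsupervised_step(G, mesh_a, mesh_b, w_self=1.0, w_cross=1.0):
    """mesh_a, mesh_b: (V, 3) vertex tensors of two unpaired meshes."""
    # Dual reconstruction: using a mesh as both pose and identity source
    # should reproduce the mesh itself.
    recon_a = G(mesh_a, mesh_a)
    recon_b = G(mesh_b, mesh_b)
    loss_self = F.l1_loss(recon_a, mesh_a) + F.l1_loss(recon_b, mesh_b)

    # Cross consistency: transfer A's pose onto B, then transfer B's original
    # pose back onto the result; the round trip should recover mesh_b.
    b_in_pose_a = G(mesh_a, mesh_b)       # pose of A, identity of B
    b_back = G(mesh_b, b_in_pose_a)       # restore B's own pose
    loss_cross = F.l1_loss(b_back, mesh_b)

    return w_self * loss_self + w_cross * loss_cross
```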