Accurately reconstructing complex human geometry arising from diverse poses and garments from a single image is highly challenging. Recently, works based on the pixel-aligned implicit function (PIFu) have made significant progress and achieved state-of-the-art fidelity in image-based 3D human digitization. However, training a PIFu relies heavily on expensive and limited 3D ground-truth data (i.e., synthetic data), which hinders its generalization to more diverse real-world images. In this work, we propose an end-to-end self-supervised network named SelfPIFu that exploits abundant and diverse in-the-wild images, yielding largely improved reconstructions when tested on unconstrained in-the-wild inputs. At the core of SelfPIFu is depth-guided volume- and surface-aware signed distance field (SDF) learning, which enables self-supervised training of a PIFu without access to ground-truth meshes. The full framework consists of a normal estimator, a depth estimator, and an SDF-based PIFu, and makes better use of additional depth ground truth during training. Extensive experiments demonstrate the effectiveness of our self-supervised framework and the superiority of using depth as input. On synthetic data, our method reaches an Intersection-over-Union (IoU) of 93.5%, 18% higher than that of PIFuHD. For in-the-wild images, user studies on the reconstructed results show that our results are selected over 68% of the time when compared against other state-of-the-art methods.
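To make the pixel-aligned, SDF-based formulation concrete, the following is a minimal PyTorch-style sketch (not the authors' implementation) of an implicit function that predicts a signed distance for a 3D query point from image features sampled at its 2D projection. The network sizes and the `calib` projection callable are illustrative assumptions.

```python
# Minimal sketch of a pixel-aligned implicit function predicting SDF values.
# Hypothetical architecture; layer sizes and the camera model are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PixelAlignedSDF(nn.Module):
    def __init__(self, feat_dim=256, hidden=128):
        super().__init__()
        # MLP maps (pixel-aligned feature, query depth) -> signed distance.
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + 1, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, 1),  # scalar SDF; sign encodes inside/outside
        )

    def forward(self, feat_map, points, calib):
        """
        feat_map: (B, C, H, W) feature map from an image encoder
        points:   (B, N, 3) 3D query points in camera space
        calib:    callable projecting camera-space points to normalized
                  image coordinates in [-1, 1] (hypothetical stand-in for
                  the paper's camera model)
        """
        xy = calib(points)                      # (B, N, 2) projected coords
        grid = xy.unsqueeze(2)                  # (B, N, 1, 2) for grid_sample
        feat = F.grid_sample(feat_map, grid, align_corners=True)  # (B, C, N, 1)
        feat = feat.squeeze(-1).permute(0, 2, 1)  # (B, N, C) pixel-aligned feats
        z = points[..., 2:3]                    # (B, N, 1) depth of each query
        return self.mlp(torch.cat([feat, z], dim=-1)).squeeze(-1)  # (B, N) SDF
```

Predicting a signed distance rather than binary occupancy is what allows surface-aware supervision: depth maps constrain where the zero-level set should lie, so gradients can flow without a ground-truth mesh.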