High-quality 3D human body reconstruction requires high-fidelity, large-scale training data and an appropriate network design that effectively exploits high-resolution input images. To tackle these problems, we propose a simple yet effective 3D human digitization method called 2K2K, which constructs a large-scale 2K human dataset and infers 3D human models from 2K-resolution images. The proposed method recovers the global shape of a human and its details separately. The low-resolution depth network predicts the global structure from a low-resolution image, and the part-wise image-to-normal network predicts the details of the 3D human body structure. The high-resolution depth network merges the global 3D shape and the detailed structures to infer high-resolution front and back depth maps. Finally, an off-the-shelf mesh generator reconstructs the full 3D human model; our code and models are available at https://github.com/SangHunHan92/2K2K. In addition, we provide 2,050 3D human models, including texture maps, 3D joints, and SMPL parameters, for research purposes. In experiments, we demonstrate competitive performance compared with recent works on various datasets.
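The coarse-to-fine data flow described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the network functions are hypothetical stubs, and all shapes, crop layouts, and function names are assumptions chosen only to show how the three stages hand data to one another.

```python
import numpy as np

# Hypothetical stubs illustrating the 2K2K pipeline's data flow.
# Function names and tensor shapes are illustrative assumptions.

def low_res_depth_net(img_lr):
    # Predicts coarse front/back depth maps from a downsampled image.
    h, w, _ = img_lr.shape
    return np.zeros((2, h, w))  # channel 0: front depth, channel 1: back depth

def part_wise_normal_net(part_crops):
    # Predicts a detailed surface-normal map for each cropped body part.
    return [np.zeros(crop.shape) for crop in part_crops]

def high_res_depth_net(coarse_depth, part_normals, hr_shape):
    # Merges the global shape with part-wise detail into 2K depth maps.
    return np.zeros((2, *hr_shape))  # high-resolution front/back depth

img_hr = np.zeros((2048, 2048, 3), dtype=np.float32)   # 2K input image
img_lr = img_hr[::4, ::4]                              # low-resolution copy

# Assumed part crops (e.g. head, torso); the real method crops per body part.
parts = [img_hr[:512, :512], img_hr[512:1536, 512:1536]]

coarse = low_res_depth_net(img_lr)                     # global structure
normals = part_wise_normal_net(parts)                  # per-part detail
depth_fb = high_res_depth_net(coarse, normals, img_hr.shape[:2])
# depth_fb would then be converted to a full mesh by an off-the-shelf generator.
print(depth_fb.shape)
```

The separation matters for memory and accuracy: the global network sees the whole (downsampled) person, while detail prediction runs only on high-resolution part crops, so no network ever processes the full 2K image at once.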