We propose a novel framework to reconstruct super-resolution human shape from a single low-resolution input image. The approach overcomes a limitation of existing methods for 3D human shape reconstruction from a single image, which require high-resolution input together with auxiliary data, such as surface normals or a parametric model, to recover high-detail shape. The proposed framework represents the reconstructed shape with a high-detail implicit function. Analogous to the objective of 2D image super-resolution, the approach learns a mapping from a low-resolution shape to its high-resolution counterpart, which is then applied to reconstruct 3D shape detail from low-resolution images. The approach is trained end-to-end with a novel loss function that estimates the information lost between low- and high-resolution representations of the same 3D surface shape. Evaluation on single-image reconstruction of clothed people demonstrates that our method achieves high-detail surface reconstruction from low-resolution images without auxiliary data. Extensive experiments show that, when applied to low-resolution images, the proposed approach estimates super-resolution human geometry with a significantly higher level of detail than previous approaches.