Remarkable advances have been achieved recently in learning neural representations that characterize object geometry, while generating textured objects suitable for downstream applications and 3D rendering remains at an early stage. In particular, reconstructing textured geometry from images of real objects is a significant challenge -- reconstructed geometry is often inexact, making realistic texturing difficult. We present Mesh2Tex, which learns a realistic object texture manifold from uncorrelated collections of 3D object geometry and photorealistic RGB images, by leveraging a hybrid mesh-neural-field texture representation. Our texture representation enables compact encoding of high-resolution textures as a neural field in the barycentric coordinate system of the mesh faces. The learned texture manifold enables effective navigation to generate an object texture for a given 3D object geometry that matches an input RGB image, and remains robust even under challenging real-world scenarios where the mesh geometry only approximately matches the underlying geometry in the RGB image. Mesh2Tex can effectively generate realistic object textures for an object mesh to match real image observations toward digitization of real environments, significantly improving over the previous state of the art.
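The core idea of the hybrid representation can be illustrated with a minimal sketch: a texture is queried at a surface point by feeding a per-face latent code together with the point's barycentric coordinates into a small MLP that outputs RGB. All names, dimensions, and the two-layer architecture below are hypothetical stand-ins, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (not from the paper): a learned latent code per mesh
# face, plus 3 barycentric coordinates as the field's query input.
LATENT_DIM, HIDDEN = 8, 16
n_faces = 4
face_codes = rng.normal(size=(n_faces, LATENT_DIM))

# Tiny two-layer MLP standing in for the neural texture field;
# in practice these weights would be trained, not random.
W1 = rng.normal(size=(LATENT_DIM + 3, HIDDEN))
b1 = np.zeros(HIDDEN)
W2 = rng.normal(size=(HIDDEN, 3))
b2 = np.zeros(3)

def query_texture(face_id: int, bary: np.ndarray) -> np.ndarray:
    """Return an RGB value for a surface point on face `face_id`,
    located by barycentric coordinates `bary` (non-negative, summing to 1)."""
    x = np.concatenate([face_codes[face_id], bary])
    h = np.maximum(x @ W1 + b1, 0.0)              # ReLU hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))   # sigmoid -> RGB in [0, 1]

rgb = query_texture(0, np.array([0.2, 0.3, 0.5]))
```

Because the field is conditioned on continuous barycentric coordinates rather than a fixed UV grid, texture resolution is limited only by the MLP's capacity, which is what makes the encoding compact for high-resolution textures.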