We propose a method for constructing generative models of 3D objects from a single 3D mesh. Our method produces a 3D morphable model that represents shape and albedo in terms of Gaussian processes. We define the shape deformations in physical (3D) space and the albedo deformations as a combination of physical-space and color-space deformations. Whereas previous approaches have typically built 3D morphable models from multiple high-quality 3D scans through principal component analysis, we build 3D morphable models from a single scan or template. We demonstrate the utility of these models in the domain of face modeling through inverse rendering and registration tasks. Specifically, we show that our approach can be used to perform face recognition using only a single 3D scan (one scan total, not one per person), and further demonstrate how multiple scans can be incorporated to improve performance without requiring dense correspondence. Our approach enables the synthesis of 3D morphable models for 3D object categories where dense correspondence between multiple scans is unavailable. We demonstrate this by constructing additional 3D morphable models for fish and birds and using them to perform simple inverse rendering tasks.
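The core idea of a Gaussian-process shape prior over a single template can be illustrated with a minimal sketch: sample smooth per-vertex displacement fields from a zero-mean GP over the template's vertex positions and add them to the template. This is only an illustrative assumption of the construction, not the paper's implementation; the squared-exponential kernel, its hyperparameters, and the function name are all placeholders.

```python
import numpy as np

def gp_deformation_samples(vertices, lengthscale=0.5, variance=0.01,
                           n_samples=3, seed=0):
    """Sample smooth per-vertex 3D displacement fields from a zero-mean GP
    with an (assumed) squared-exponential kernel over vertex positions."""
    rng = np.random.default_rng(seed)
    # Pairwise squared distances between template vertices.
    d2 = np.sum((vertices[:, None, :] - vertices[None, :, :]) ** 2, axis=-1)
    K = variance * np.exp(-0.5 * d2 / lengthscale**2)
    # Jittered Cholesky factor turns white noise into correlated (smooth) fields.
    L = np.linalg.cholesky(K + 1e-8 * np.eye(len(vertices)))
    # One independent GP draw per coordinate axis (x, y, z).
    return np.stack([L @ rng.standard_normal((len(vertices), 3))
                     for _ in range(n_samples)])

# A unit cube's vertices stand in for the single scan/template mesh.
template = np.array([[x, y, z]
                     for x in (0.0, 1.0)
                     for y in (0.0, 1.0)
                     for z in (0.0, 1.0)])
samples = gp_deformation_samples(template)
deformed = template + samples  # each sample yields one plausible new shape
```

Nearby vertices receive correlated displacements, so each sampled shape is a smooth deformation of the template rather than independent per-vertex noise; an albedo prior could be sketched analogously with draws in color space.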