We propose a novel technique for producing high-quality 3D models that match a given target object image or scan. Our method is based on retrieving an existing shape from a database of 3D models and then deforming its parts to match the target shape. Unlike previous approaches that focus independently on either shape retrieval or deformation, we propose a joint learning procedure that simultaneously trains the neural deformation module and the embedding space used by the retrieval module. This enables our network to learn a deformation-aware embedding space, so that retrieved models are more amenable to matching the target after an appropriate deformation. In fact, we use the embedding space to guide the shape pairs used to train the deformation module, so that it invests its capacity in learning deformations between meaningful shape pairs. Furthermore, our novel part-aware deformation module can handle inconsistent and diverse part structures on the source shapes. We demonstrate the benefits of our joint training not only on our novel framework, but also on other state-of-the-art neural deformation modules proposed in recent years. Lastly, we show that our jointly trained method outperforms a two-step deformation-aware retrieval baseline that uses either direct optimization or a pre-trained deformation module in place of the jointly trained neural deformation.
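To make the joint learning procedure concrete, the following is a minimal PyTorch sketch of the core idea: a retrieval encoder and a deformation module trained together, with the embedding distances used both as a retrieval signal and to weight which source–target pairs the deformer is trained on. All module names, dimensions, the simplified point-offset deformer, and the soft pair-weighting scheme are illustrative assumptions, not the exact architecture or losses of the method described above.

```python
# Hypothetical sketch of deformation-aware joint training (not the paper's exact model).
import torch
import torch.nn as nn
import torch.nn.functional as F


class RetrievalEncoder(nn.Module):
    """Maps a point cloud (B, N, 3) to a global embedding used for retrieval."""
    def __init__(self, embed_dim=128):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(),
                                 nn.Linear(64, 128), nn.ReLU(),
                                 nn.Linear(128, embed_dim))

    def forward(self, pts):
        return self.mlp(pts).max(dim=1).values  # (B, embed_dim)


class DeformationModule(nn.Module):
    """Toy deformer: predicts per-point offsets on the source, conditioned on the
    target embedding (stands in for the part-aware deformation module)."""
    def __init__(self, embed_dim=128):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3 + embed_dim, 128), nn.ReLU(),
                                 nn.Linear(128, 3))

    def forward(self, src_pts, tgt_feat):
        cond = tgt_feat.unsqueeze(1).expand(-1, src_pts.shape[1], -1)
        return src_pts + self.mlp(torch.cat([src_pts, cond], dim=-1))


def chamfer(a, b):
    """Symmetric Chamfer distance per batch element; a, b are (B, N, 3)."""
    d = torch.cdist(a, b)  # (B, Na, Nb)
    return d.min(dim=2).values.mean(dim=1) + d.min(dim=1).values.mean(dim=1)


encoder, deformer = RetrievalEncoder(), DeformationModule()
opt = torch.optim.Adam(list(encoder.parameters()) + list(deformer.parameters()), lr=1e-3)


def train_step(target, sources):
    """target: (B, N, 3) target scans; sources: (B, K, N, 3) candidate database shapes."""
    B, K, N, _ = sources.shape
    tgt_feat = encoder(target)                                        # (B, D)
    src_feat = encoder(sources.reshape(B * K, N, 3)).reshape(B, K, -1)
    # Retrieval signal: squared distances in the learned embedding space.
    emb_dist = ((src_feat - tgt_feat.unsqueeze(1)) ** 2).sum(-1)      # (B, K)
    # Deform every candidate toward the target and measure post-deformation fit.
    fit = torch.stack([chamfer(deformer(sources[:, k], tgt_feat), target)
                       for k in range(K)], dim=1)                     # (B, K)
    # Joint objective (illustrative): the embedding is trained so retrieval distance
    # mirrors the post-deformation fitting error (deformation-aware embedding), while
    # the deformation loss is weighted toward candidates the embedding ranks highly,
    # so the deformer spends capacity on meaningful source-target pairs.
    weights = F.softmax(-emb_dist.detach(), dim=1)                    # (B, K)
    loss_deform = (weights * fit).sum(dim=1).mean()
    loss_embed = F.mse_loss(emb_dist, fit.detach())
    loss = loss_deform + loss_embed
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()


# Example usage with random data (2 targets, 4 candidate sources, 256 points each):
loss = train_step(torch.randn(2, 256, 3), torch.randn(2, 4, 256, 3))
```

In this sketch the stop-gradients (`detach`) realize the two directions of the coupling separately: the embedding loss pulls retrieval distances toward the observed post-deformation errors, and the embedding-derived weights steer which deformations the deformer practices, which is one plausible way to instantiate the joint training described in the abstract.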