Transferring the style of one image onto another is a popular and widely studied task in computer vision. Yet, learning-based style transfer in the 3D setting remains a largely unexplored problem. To our knowledge, we propose the first learning-based generative approach for style transfer between 3D objects. Our method combines the content of a source 3D model with the style of a target 3D model to generate a novel shape that resembles the target in style while retaining the content of the source. The proposed framework can synthesize new 3D shapes in the form of both point clouds and meshes. Furthermore, we extend our technique to implicitly learn the underlying multimodal style distribution of the individual category domains. By sampling style codes from the learned distributions, we increase the variety of styles that our model can confer to a given reference object. Experimental results validate the effectiveness of the proposed 3D style transfer method on a number of benchmarks.
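The sketch below illustrates, in broad strokes, the content/style factorization described above: a content code is extracted from the source shape and combined with a style code that is either encoded from a target shape or sampled from a prior over styles. It is a minimal illustration only, not the authors' architecture; all module names, layer sizes, and the Gaussian prior are assumptions.

```python
# Minimal sketch of content/style combination for 3D shapes (hypothetical, not the paper's code).
import torch
import torch.nn as nn

class StyleTransfer3D(nn.Module):
    def __init__(self, content_dim=256, style_dim=64, num_points=2048):
        super().__init__()
        # Placeholder point-wise encoders and a fully connected decoder.
        self.content_enc = nn.Sequential(nn.Linear(3, 128), nn.ReLU(), nn.Linear(128, content_dim))
        self.style_enc = nn.Sequential(nn.Linear(3, 128), nn.ReLU(), nn.Linear(128, style_dim))
        self.decoder = nn.Sequential(
            nn.Linear(content_dim + style_dim, 512), nn.ReLU(),
            nn.Linear(512, num_points * 3))
        self.num_points = num_points
        self.style_dim = style_dim

    def encode(self, points, encoder):
        # Per-point features pooled into one global code (PointNet-style max pooling).
        return encoder(points).max(dim=1).values

    def forward(self, source_pts, target_pts=None):
        content = self.encode(source_pts, self.content_enc)
        if target_pts is not None:
            # Style taken from a reference target shape.
            style = self.encode(target_pts, self.style_enc)
        else:
            # Style sampled from an assumed Gaussian prior over style codes.
            style = torch.randn(source_pts.size(0), self.style_dim)
        out = self.decoder(torch.cat([content, style], dim=-1))
        return out.view(-1, self.num_points, 3)

# Usage: confer the style of `target` onto the content of `source`,
# or sample a style code to generate a new variant of `source`.
source = torch.rand(1, 2048, 3)
target = torch.rand(1, 2048, 3)
model = StyleTransfer3D()
stylized = model(source, target)   # content of source, style of target
sampled = model(source)            # style drawn from the assumed prior
print(stylized.shape, sampled.shape)
```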