Visual content creation has attracted soaring interest given its applications in mobile photography and AR/VR. Style transfer and single-image 3D photography, two representative tasks, have so far evolved independently. In this paper, we make a connection between the two and address the challenging task of 3D photo stylization: generating stylized novel views from a single image given an arbitrary style. Our key insight is that style transfer and view synthesis must be jointly modeled for this task. To this end, we propose a deep model that learns geometry-aware content features for stylization from a point cloud representation of the scene, resulting in high-quality stylized images that are consistent across views. Further, we introduce a novel training protocol that enables learning from 2D images alone. We demonstrate the superiority of our method via extensive qualitative and quantitative studies, and showcase key applications of our method in light of the growing demand for 3D content creation from 2D image assets.
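To make the core idea concrete, the sketch below illustrates one plausible reading of "geometry-aware content features for stylization from a point cloud": encode per-point features (positions plus colors) with a shared point-wise MLP, then transfer style statistics onto those features with adaptive instance normalization (AdaIN). This is a minimal, illustrative sketch, not the paper's implementation; `PointFeatureStylizer`, the feature dimensions, and the use of AdaIN in place of the paper's actual stylization module are all assumptions.

```python
import torch
import torch.nn as nn

def adain(content_feat, style_feat, eps=1e-5):
    # Adaptive instance normalization: align the channel-wise mean/std
    # of per-point content features to those of the style features.
    c_mean = content_feat.mean(dim=-1, keepdim=True)
    c_std = content_feat.std(dim=-1, keepdim=True) + eps
    s_mean = style_feat.mean(dim=-1, keepdim=True)
    s_std = style_feat.std(dim=-1, keepdim=True) + eps
    return (content_feat - c_mean) / c_std * s_std + s_mean

class PointFeatureStylizer(nn.Module):
    """Hypothetical stand-in for a geometry-aware point-cloud stylizer."""
    def __init__(self, in_dim=3 + 3, feat_dim=64):  # xyz + rgb per point
        super().__init__()
        # Shared per-point MLP (PointNet-style), realized as 1x1 convolutions.
        self.encoder = nn.Sequential(
            nn.Conv1d(in_dim, feat_dim, 1), nn.ReLU(),
            nn.Conv1d(feat_dim, feat_dim, 1),
        )

    def forward(self, points, style_feat):
        # points: (B, 6, N) with xyz + rgb; style_feat: (B, feat_dim, M)
        content_feat = self.encoder(points)     # (B, feat_dim, N)
        return adain(content_feat, style_feat)  # stylized per-point features

# Usage: stylize features of 1024 points with a 64-channel style code.
model = PointFeatureStylizer()
pts = torch.randn(2, 6, 1024)
style = torch.randn(2, 64, 256)
out = model(pts, style)
print(out.shape)  # torch.Size([2, 64, 1024])
```

Because the style statistics are applied to features anchored at 3D points rather than to a single 2D feature map, the same stylized features can be projected to any camera pose, which is one way to obtain the cross-view consistency the abstract describes.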