We tackle a 3D scene stylization problem: generating stylized images of a scene from arbitrary novel views, given a set of images of the same scene and a reference image of the desired style. Directly combining novel view synthesis and stylization approaches leads to results that are blurry or inconsistent across views. We propose a point cloud-based method for consistent 3D scene stylization. First, we construct a point cloud by back-projecting image features into 3D space. Second, we develop point cloud aggregation modules that gather the style information of the 3D scene and then modulate the features in the point cloud with a linear transformation matrix. Finally, we project the transformed features back to 2D space to obtain the novel views. Experimental results on two diverse datasets of real-world scenes show that our method generates more consistent stylized novel views than alternative approaches.
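The modulation step above can be illustrated with a minimal sketch. This is not the authors' exact module: it assumes a standard whitening-and-coloring transform as the linear transformation, and the function name `stylize_point_features` and all shapes are hypothetical.

```python
import numpy as np

def stylize_point_features(content_feats, style_feats):
    """Illustrative sketch: modulate per-point content features with a linear
    transformation matrix derived from style feature statistics
    (whitening-and-coloring transform); not the paper's exact module."""
    # content_feats: (N, C) features back-projected onto the point cloud
    # style_feats:   (M, C) features extracted from the style reference image
    c_mean = content_feats.mean(axis=0)
    s_mean = style_feats.mean(axis=0)
    cc = content_feats - c_mean
    sc = style_feats - s_mean
    # Regularized covariances of centered content and style features
    cov_c = cc.T @ cc / (len(cc) - 1) + 1e-5 * np.eye(cc.shape[1])
    cov_s = sc.T @ sc / (len(sc) - 1) + 1e-5 * np.eye(sc.shape[1])

    def mat_pow(cov, p):
        # Matrix power of a symmetric PSD matrix via eigendecomposition
        w, v = np.linalg.eigh(cov)
        return v @ np.diag(np.clip(w, 1e-8, None) ** p) @ v.T

    # Linear transformation: whiten content statistics, color with style's
    T = mat_pow(cov_s, 0.5) @ mat_pow(cov_c, -0.5)
    return cc @ T.T + s_mean  # transformed features, shape (N, C)
```

The transformed features keep the point cloud's geometry fixed; only the feature channels at each point are remapped toward the style statistics, which is what makes the subsequent 2D projections view-consistent.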