We apply style transfer to mesh reconstructions of indoor scenes. This enables VR applications such as experiencing 3D environments painted in the style of a favorite artist. Style transfer typically operates on 2D images, which makes stylizing a mesh challenging. When optimized over a variety of poses, stylization patterns become stretched out and inconsistent in size. Model-based 3D style transfer methods, on the other hand, allow stylization from a sparse set of images, but they require a network at inference time. We therefore optimize an explicit texture for the reconstructed mesh of a scene and stylize it jointly from all available input images. Our depth- and angle-aware optimization leverages surface normal and depth data of the underlying mesh to create a uniform and consistent stylization for the whole scene. Our experiments show that our method creates sharp and detailed results for the complete scene without view-dependent artifacts. Through extensive ablation studies, we show that the proposed 3D awareness enables style transfer to be applied to the 3D domain of a mesh. Our method can be used to render a stylized mesh in real-time with traditional rendering pipelines.
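To make the depth- and angle-aware idea concrete, the following is a minimal NumPy sketch of one plausible weighting scheme, not the paper's actual implementation. The function name `stylization_weights` and the exact form of the depth and angle terms are assumptions: the angle term down-weights surfaces seen at grazing angles, and the depth term compensates for distant surfaces covering fewer pixels, so the stylization pattern size stays roughly uniform.

```python
import numpy as np

def stylization_weights(depth, normals, view_dirs):
    """Per-pixel weights for a depth- and angle-aware style loss (hypothetical sketch).

    depth:     (H, W)    distance from camera to surface
    normals:   (H, W, 3) unit surface normals
    view_dirs: (H, W, 3) unit directions from surface toward camera
    """
    # Angle term: pixels seen at grazing angles show heavily foreshortened
    # texture, so their contribution to the style loss is reduced.
    cos_angle = np.clip(np.sum(normals * view_dirs, axis=-1), 0.0, 1.0)
    # Depth term: distant surfaces occupy fewer pixels per texel; dividing by
    # depth keeps the effective stylization pattern size consistent.
    return cos_angle / np.maximum(depth, 1e-6)
```

In a full pipeline, such weights would scale the per-pixel style loss (or its gradients) during texture optimization, jointly over all input views.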