We present a method for transferring the artistic features of an arbitrary style image to a 3D scene. Previous methods that perform 3D stylization on point clouds or meshes are sensitive to geometric reconstruction errors for complex real-world scenes. Instead, we propose to stylize the more robust radiance field representation. We find that the commonly used Gram matrix-based loss tends to produce blurry results without faithful brushstrokes, and introduce a nearest neighbor-based loss that is highly effective at capturing style details while maintaining multi-view consistency. We also propose a novel deferred back-propagation method to enable optimization of memory-intensive radiance fields using style losses defined on full-resolution rendered images. Our extensive evaluation demonstrates that our method outperforms baselines by generating artistic appearance that more closely resembles the style image. Please check our project page for video results and open-source implementations: https://www.cs.cornell.edu/projects/arf/ .
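To make the nearest neighbor-based style loss concrete, below is a minimal PyTorch sketch of nearest-neighbor feature matching over VGG features. The function name `nnfm_loss`, the flattened `(N, C)` feature inputs, and the exact normalization are illustrative assumptions, not the authors' released code; the core idea is that each rendered feature is penalized by its cosine distance to the closest style feature, rather than by a global Gram-matrix statistic.

```python
import torch
import torch.nn.functional as F

def nnfm_loss(render_feats: torch.Tensor, style_feats: torch.Tensor) -> torch.Tensor:
    """Nearest-neighbor feature-matching style loss (illustrative sketch).

    render_feats: (N, C) VGG features of the rendered image, flattened over pixels.
    style_feats:  (M, C) VGG features of the style image, flattened over pixels.
    """
    # Normalize features so a dot product equals cosine similarity.
    r = F.normalize(render_feats, dim=-1)      # (N, C)
    s = F.normalize(style_feats, dim=-1)       # (M, C)
    cos_sim = r @ s.t()                        # (N, M) pairwise similarities
    # For each rendered feature, take the cosine distance to its
    # nearest style feature, and average over all rendered features.
    d_min = (1.0 - cos_sim).min(dim=1).values  # (N,)
    return d_min.mean()
```

Because the loss matches local features independently, sharp brushstroke-like details in the style image can be transferred without being averaged away, which is the failure mode the abstract attributes to Gram matrix-based losses.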
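The deferred back-propagation idea can likewise be sketched in a few lines. The two-stage structure below follows the description above: first render the full-resolution image without building an autograd graph and compute per-pixel loss gradients on a detached copy; then re-render small patches with autograd enabled and inject the cached pixel gradients. The renderer interface `render_fn(params, y0, x0, h, w)` and the patch size are hypothetical placeholders, not the paper's actual API.

```python
import torch

def deferred_backprop(render_fn, params, loss_fn, H, W, patch=64):
    """Deferred back-propagation (illustrative sketch).

    render_fn(params, y0, x0, h, w) -> (h, w, 3) image crop; assumed to use
    `params` (radiance-field parameters with requires_grad=True) internally.
    loss_fn(image) -> scalar style loss on a full-resolution image.
    """
    # Stage 1: full-resolution forward pass with no computation graph,
    # so memory stays bounded regardless of image size.
    with torch.no_grad():
        full = render_fn(params, 0, 0, H, W)   # (H, W, 3)
    image = full.detach().requires_grad_(True)
    loss_fn(image).backward()                  # per-pixel dL/dI
    pixel_grad = image.grad                    # (H, W, 3), cached

    # Stage 2: patch-wise re-render with autograd enabled, injecting the
    # cached per-pixel gradients so dL/dparams accumulates via the chain rule.
    for y in range(0, H, patch):
        for x in range(0, W, patch):
            out = render_fn(params, y, x,
                            min(patch, H - y), min(patch, W - x))
            out.backward(gradient=pixel_grad[y:y + out.shape[0],
                                             x:x + out.shape[1]])
```

Only one patch's computation graph is alive at a time, which is what allows a style loss defined on the full-resolution rendering to drive optimization of a memory-intensive radiance field.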