This paper presents a stylized novel view synthesis method. Applying state-of-the-art stylization methods to novel views frame by frame often causes jittering artifacts due to the lack of cross-view consistency. Therefore, this paper investigates 3D scene stylization, which provides a strong inductive bias for consistent novel view synthesis. Specifically, we adopt the emerging neural radiance fields (NeRF) as our 3D scene representation for their capability to render high-quality novel views of a variety of scenes. However, as rendering a novel view from a NeRF requires a large number of samples, training a stylized NeRF requires an amount of GPU memory that exceeds the capacity of an off-the-shelf GPU. We introduce a new training method that addresses this problem by alternating the NeRF and stylization optimization steps. Such a method enables us to make full use of our hardware memory capacity, both to generate images at higher resolution and to adopt more expressive image style transfer methods. Our experiments show that our method produces stylized NeRFs for a wide range of content, including indoor, outdoor, and dynamic scenes, and synthesizes high-quality novel views with cross-view consistency.
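For intuition, the alternating optimization scheme mentioned above might look roughly like the following PyTorch-style sketch. This is only an illustrative sketch, not the paper's actual implementation: `nerf`, `render_rays`, `render_patch`, `style_loss_fn`, and the step schedule are hypothetical placeholders, and the paper's specific memory-saving strategy may differ in detail.

```python
# Illustrative sketch (assumed, not the authors' exact code): alternate between
# (a) photometric NeRF updates on small ray batches and
# (b) stylization updates on rendered image patches,
# so only one objective's computation graph occupies GPU memory at a time.
import torch

def train_stylized_nerf(nerf, style_loss_fn, ray_batches, patch_cameras,
                        num_rounds=1000, recon_steps=10, style_steps=1, lr=5e-4):
    optimizer = torch.optim.Adam(nerf.parameters(), lr=lr)

    for _ in range(num_rounds):
        # (a) Reconstruction steps: cheap per-ray photometric loss.
        for _ in range(recon_steps):
            rays, target_rgb = next(ray_batches)        # sampled rays + ground-truth colors
            pred_rgb = nerf.render_rays(rays)           # standard volume rendering
            loss = torch.mean((pred_rgb - target_rgb) ** 2)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

        # (b) Stylization steps: render a larger patch and back-propagate
        #     a perceptual style loss through the renderer.
        for _ in range(style_steps):
            camera = next(patch_cameras)
            patch = nerf.render_patch(camera)           # H x W x 3 image patch
            loss = style_loss_fn(patch)                 # e.g. a Gram-matrix style loss
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```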