The rendering procedure used by neural radiance fields (NeRF) samples a scene with a single ray per pixel and may therefore produce renderings that are excessively blurred or aliased when training or testing images observe scene content at different resolutions. The straightforward solution of supersampling by rendering with multiple rays per pixel is impractical for NeRF, because rendering each ray requires querying a multilayer perceptron hundreds of times. Our solution, which we call "mip-NeRF" (à la "mipmap"), extends NeRF to represent the scene at a continuously-valued scale. By efficiently rendering anti-aliased conical frustums instead of rays, mip-NeRF reduces objectionable aliasing artifacts and significantly improves NeRF's ability to represent fine details, while also being 7% faster than NeRF and half the size. Compared to NeRF, mip-NeRF reduces average error rates by 16% on the dataset presented with NeRF and by 60% on a challenging multiscale variant of that dataset that we present. Mip-NeRF is also able to match the accuracy of a brute-force supersampled NeRF on our multiscale dataset while being 22x faster.
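The key ingredient that lets mip-NeRF render an anti-aliased conical frustum rather than a ray is its integrated positional encoding (IPE): each frustum is approximated by a Gaussian, and the network is fed the expected sin/cos features under that Gaussian, so frequencies whose period is small relative to the frustum's extent are smoothly attenuated toward zero. Below is a minimal NumPy sketch of this encoding under a diagonal-covariance assumption; the function name and default frequency count are ours, not from the paper.

```python
import numpy as np

def integrated_pos_enc(mean, var, num_freqs=4):
    """Sketch of mip-NeRF's integrated positional encoding.

    mean, var: per-dimension mean and variance of the Gaussian
    approximating a conical frustum (shape (..., 3)).
    Returns the expected [sin, cos] features at frequencies
    2^0 .. 2^(num_freqs-1); the exp(-0.5 * var) factor damps
    frequencies that are high relative to the Gaussian's extent,
    which is what suppresses aliasing.
    """
    scales = 2.0 ** np.arange(num_freqs)                  # 2^0 .. 2^(L-1)
    scaled_mean = mean[..., None, :] * scales[:, None]    # (..., L, 3)
    scaled_var = var[..., None, :] * scales[:, None] ** 2
    damp = np.exp(-0.5 * scaled_var)                      # expectation factor
    feats = np.concatenate(
        [np.sin(scaled_mean) * damp, np.cos(scaled_mean) * damp], axis=-1
    )
    return feats.reshape(*mean.shape[:-1], -1)
```

For a point-like Gaussian (zero variance) this reduces to NeRF's ordinary positional encoding; as the variance grows (a wide frustum seen at low resolution), the high-frequency features shrink toward zero, giving a representation that varies continuously with scale.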