We present Block-NeRF, a variant of Neural Radiance Fields that can represent large-scale environments. Specifically, we demonstrate that when scaling NeRF to render city-scale scenes spanning multiple blocks, it is vital to decompose the scene into individually trained NeRFs. This decomposition decouples rendering time from scene size, enables rendering to scale to arbitrarily large environments, and allows per-block updates of the environment. We adopt several architectural changes to make NeRF robust to data captured over months under different environmental conditions. We add appearance embeddings, learned pose refinement, and controllable exposure to each individual NeRF, and introduce a procedure for aligning appearance between adjacent NeRFs so that they can be seamlessly combined. We build a grid of Block-NeRFs from 2.8 million images to create the largest neural scene representation to date, capable of rendering an entire neighborhood of San Francisco.
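As a rough illustration of the conditioning scheme summarized above, the following sketch shows a single Block-NeRF whose color head is conditioned on a per-image appearance embedding and a scalar exposure value, together with one plausible inverse-distance scheme for blending renders from adjacent blocks. This is a minimal sketch under stated assumptions, not the authors' implementation: all module names and hyperparameters are illustrative, and positional encoding and learned pose refinement are omitted for brevity.

```python
# Illustrative sketch of Block-NeRF-style conditioning (assumed names and
# hyperparameters throughout; not the paper's actual code).

import torch
import torch.nn as nn

class BlockNeRF(nn.Module):
    def __init__(self, num_images: int, app_dim: int = 32, hidden: int = 256):
        super().__init__()
        # One learned appearance embedding per training image, so lighting
        # and weather variation across months of capture can be absorbed.
        self.appearance = nn.Embedding(num_images, app_dim)
        # Positional encoding of xyz is omitted here for brevity.
        self.trunk = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.sigma_head = nn.Linear(hidden, 1)  # view-independent density
        # Color head sees trunk features + view direction + appearance
        # embedding + a scalar exposure input.
        self.color_head = nn.Sequential(
            nn.Linear(hidden + 3 + app_dim + 1, hidden // 2), nn.ReLU(),
            nn.Linear(hidden // 2, 3), nn.Sigmoid(),
        )

    def forward(self, xyz, viewdir, image_ids, exposure):
        h = self.trunk(xyz)
        sigma = self.sigma_head(h)
        app = self.appearance(image_ids)
        rgb = self.color_head(torch.cat([h, viewdir, app, exposure], dim=-1))
        return sigma, rgb

def composite_blocks(rgbs, block_centers, cam_pos, power: float = 4.0):
    # Blend full renders from adjacent blocks with inverse-distance
    # weights between the camera and each block's center; `power` is an
    # assumed falloff exponent.
    d = torch.stack([torch.norm(cam_pos - c) for c in block_centers])
    w = d.pow(-power)
    w = w / w.sum()
    return sum(wi * rgb for wi, rgb in zip(w, rgbs))
```

Conditioning color (but not density) on the appearance embedding keeps scene geometry fixed while allowing per-image lighting to vary; optimizing these embeddings across shared block boundaries is one way the appearance of adjacent NeRFs can be brought into agreement before blending.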