We introduce Neural Point Light Fields, which represent scenes implicitly with a light field living on a sparse point cloud. Combining differentiable volume rendering with learned implicit density representations has made it possible to synthesize photo-realistic images for novel views of small scenes. Because neural volumetric rendering methods require dense sampling of the underlying functional scene representation, with hundreds of samples along each ray cast through the volume, they are fundamentally limited to small scenes in which the same objects are projected to hundreds of training views. Promoting sparse point clouds to neural implicit light fields allows us to represent large scenes effectively with only a single implicit sampling operation per ray. These point light fields are a function of the ray direction and the local point feature neighborhood, allowing us to interpolate the light field conditioned on training images without dense object coverage and parallax. We assess the proposed method for novel view synthesis on large driving scenarios, where we synthesize realistic unseen views that existing implicit approaches fail to represent. We validate that Neural Point Light Fields make it possible to predict videos along unseen trajectories that were previously only feasible to generate by explicitly modeling the scene.
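The following is a minimal sketch, not the authors' implementation, of the core idea of evaluating a light field on a sparse point cloud with a single implicit sampling operation per ray: each ray gathers features from its nearest points, and a small network maps the aggregated feature together with the ray direction to a color. All module names, layer sizes, and the inverse-distance aggregation scheme are illustrative assumptions.

```python
import torch
import torch.nn as nn


class PointLightField(nn.Module):
    def __init__(self, num_points, feat_dim=64, k=8):
        super().__init__()
        self.k = k
        # Learnable feature vector per point of the sparse point cloud.
        self.point_feats = nn.Parameter(torch.randn(num_points, feat_dim) * 0.01)
        # Maps aggregated point features + ray direction to an RGB value.
        self.head = nn.Sequential(
            nn.Linear(feat_dim + 3, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, 3), nn.Sigmoid(),
        )

    def forward(self, points, ray_origins, ray_dirs):
        # points: (P, 3); ray_origins, ray_dirs: (R, 3), ray_dirs unit length.
        # Distance from each point to each ray (via the closest point on the ray).
        to_pts = points[None, :, :] - ray_origins[:, None, :]          # (R, P, 3)
        t = (to_pts * ray_dirs[:, None, :]).sum(-1, keepdim=True)      # (R, P, 1)
        closest = ray_origins[:, None, :] + t * ray_dirs[:, None, :]   # (R, P, 3)
        dist = (points[None, :, :] - closest).norm(dim=-1)             # (R, P)
        # Single sampling operation per ray: gather the k nearest point features
        # and average them with inverse-distance weights.
        d, idx = dist.topk(self.k, largest=False)                      # (R, k)
        w = 1.0 / (d + 1e-6)
        w = w / w.sum(-1, keepdim=True)
        feats = self.point_feats[idx]                                  # (R, k, F)
        agg = (w[..., None] * feats).sum(1)                            # (R, F)
        return self.head(torch.cat([agg, ray_dirs], dim=-1))           # (R, 3)


# Usage: one forward pass predicts ray colors without marching hundreds of
# samples through a volume.
pts = torch.randn(1000, 3)
origins = torch.zeros(4, 3)
dirs = nn.functional.normalize(torch.randn(4, 3), dim=-1)
rgb = PointLightField(num_points=1000)(pts, origins, dirs)
```

In contrast to volumetric rendering, which integrates density and color over many samples along each ray, this sketch performs one feature lookup per ray, which is what makes the representation tractable for large driving scenes.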