The latest trends in inverse rendering techniques for reconstruction use neural networks to learn 3D representations as neural fields. NeRF-based techniques fit multi-layer perceptrons (MLPs) to a set of training images to estimate a radiance field, which can then be rendered from any virtual camera by means of volume rendering algorithms. Major drawbacks of these representations are the lack of well-defined surfaces and non-interactive rendering times, as wide and deep MLPs must be queried millions of times per frame. Each of these limitations has recently been overcome in isolation, but overcoming them simultaneously opens up new use cases. We present KiloNeuS, a new neural object representation that can be rendered in path-traced scenes at interactive frame rates. KiloNeuS enables the simulation of realistic light interactions between neural and classic primitives in shared scenes, and it demonstrably performs in real time with plenty of room for future optimizations and extensions.
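For context, the per-frame cost referred to above comes from the discrete volume-rendering quadrature used by NeRF-style methods; the equation below is that standard quadrature, and the resolution and sample counts in the accompanying example are illustrative assumptions rather than figures reported by this work.

    \hat{C}(\mathbf{r}) = \sum_{i=1}^{N} T_i \left(1 - e^{-\sigma_i \delta_i}\right) \mathbf{c}_i, \qquad T_i = \exp\!\left(-\sum_{j=1}^{i-1} \sigma_j \delta_j\right)

Here each density-color pair (\sigma_i, \mathbf{c}_i) along a camera ray is one MLP query and \delta_i is the distance between adjacent samples, so rendering even an 800x800 image with 128 samples per ray requires roughly 8 x 10^7 network evaluations per frame, which is what rules out interactive rendering with a single wide and deep MLP.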