We present a real-time neural radiance caching method for path-traced global illumination. Our system is designed to handle fully dynamic scenes, and makes no assumptions about the lighting, geometry, or materials. The data-driven nature of our approach sidesteps many difficulties of caching algorithms, such as locating, interpolating, and updating cache points. Since pretraining neural networks to handle novel, dynamic scenes is a formidable generalization challenge, we do away with pretraining and instead achieve generalization via adaptation, i.e., we opt for training the radiance cache while rendering. We employ self-training to provide low-noise training targets and simulate infinite-bounce transport by merely iterating few-bounce training updates. The updates and cache queries incur a mild overhead -- about 2.6 ms at full HD resolution -- thanks to a streaming implementation of the neural network that fully exploits modern hardware. We demonstrate significant noise reduction at the cost of little induced bias, and report state-of-the-art, real-time performance on a number of challenging scenarios.