Volume data is common across many scientific disciplines, such as medicine, physics, and biology. Experts rely on robust scientific visualization techniques to extract valuable insights from the data. In recent years, path tracing has emerged as the preferred approach for volumetric rendering, owing to its high degree of realism. However, real-time volumetric path tracing often suffers from stochastic noise and long convergence times, limiting interactive exploration. In this paper, we present a novel method to enable real-time global illumination for volume data visualization. We develop Photon Field Networks -- a phase-function-aware, multi-light neural representation of indirect volumetric global illumination. The fields are trained on multi-phase photon caches that we compute a priori. Training can be done within seconds, after which the fields can be used in various rendering tasks. To showcase their potential, we develop a custom neural path tracer with which our photon fields achieve interactive framerates even on large datasets. We conduct in-depth evaluations of the method's performance, including visual quality, stochastic noise, inference and rendering speeds, and accuracy of illumination and phase-function awareness. Results are compared to ray marching, path tracing, and photon mapping. Our findings show that Photon Field Networks can faithfully represent indirect global illumination across the phase spectrum while exhibiting less stochastic noise and rendering at significantly faster rates than traditional methods.