Physically based rendering of complex scenes can be prohibitively costly, with a potentially unbounded and uneven distribution of complexity across the rendered image. The goal of an ideal level of detail (LoD) method is to make rendering costs independent of the 3D scene complexity while preserving the appearance of the scene. However, current prefiltering LoD methods are limited in the appearances they can support due to their reliance on approximate models and other heuristics. We propose the first comprehensive multi-scale LoD framework for prefiltering 3D environments with complex geometry and materials (e.g., the Disney BRDF), while maintaining the appearance with respect to the ray-traced reference. Using a multi-scale hierarchy of the scene, we perform a data-driven prefiltering step to obtain an appearance phase function and directional coverage mask at each scale. At the heart of our approach is a novel neural representation that encodes this information into a compact latent form that is easy to decode inside a physically based renderer. Once a scene is baked out, our method requires no original geometry, materials, or textures at render time. We demonstrate that our approach compares favorably to state-of-the-art prefiltering methods and achieves considerable savings in memory for complex scenes.