Creating realistic virtual assets is a time-consuming process: it usually involves an artist designing the object and then spending considerable effort tweaking its appearance. Intricate details and certain effects, such as subsurface scattering, elude representation with real-time BRDFs, making it impossible to fully capture the appearance of certain objects. Inspired by recent progress in neural rendering, we propose an approach for capturing real-world objects in everyday environments faithfully and quickly. We use a novel neural representation to reconstruct volumetric effects, such as translucent object parts, and to preserve photorealistic object appearance. To support real-time rendering without compromising quality, our model combines a grid of features with a small MLP decoder that is transpiled into efficient shader code, achieving interactive frame rates. This enables seamless integration of the proposed neural assets with existing mesh environments and objects. Because rendering relies on standard shader code, it is portable across a wide range of existing hardware and software systems.
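The core idea of the representation described above can be illustrated with a minimal sketch: a 3D grid of learned feature vectors is trilinearly interpolated at a query point, and a small MLP decodes the interpolated features into a color. The grid resolution, feature width, layer sizes, and random weights below are hypothetical placeholders, not the paper's actual architecture; a trained model would learn these parameters, and the decoder would be transpiled into shader code rather than run in Python.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: an 8x8x8 grid of 4-D features and a tiny
# two-layer MLP decoder (the real model's dimensions differ).
GRID, FDIM, HIDDEN = 8, 4, 16
feature_grid = rng.normal(size=(GRID, GRID, GRID, FDIM)).astype(np.float32)
W1 = rng.normal(size=(FDIM, HIDDEN)).astype(np.float32)
b1 = np.zeros(HIDDEN, dtype=np.float32)
W2 = rng.normal(size=(HIDDEN, 3)).astype(np.float32)
b2 = np.zeros(3, dtype=np.float32)

def sample_grid(p):
    """Trilinearly interpolate the feature grid at a point p in [0, 1)^3."""
    x = p * (GRID - 1)
    i0 = np.floor(x).astype(int)
    i1 = np.minimum(i0 + 1, GRID - 1)
    t = x - i0                      # fractional offsets within the cell
    f = np.zeros(FDIM, dtype=np.float32)
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((t[0] if dx else 1 - t[0])
                     * (t[1] if dy else 1 - t[1])
                     * (t[2] if dz else 1 - t[2]))
                idx = (i1[0] if dx else i0[0],
                       i1[1] if dy else i0[1],
                       i1[2] if dz else i0[2])
                f += w * feature_grid[idx]
    return f

def decode(f):
    """Small MLP decoder: interpolated features -> RGB in [0, 1]."""
    h = np.maximum(0.0, f @ W1 + b1)              # ReLU hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))   # sigmoid output

rgb = decode(sample_grid(np.array([0.3, 0.5, 0.7])))
print(rgb.shape)  # (3,)
```

Because both the interpolation and the tiny MLP reduce to texture fetches and a handful of matrix multiplies, this kind of decoder maps naturally onto a fragment shader, which is what makes real-time, portable rendering feasible.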