Numerical integration is a foundational technique in scientific computing and is at the core of many computer vision applications. Among these applications, neural volume rendering has recently been proposed as a new paradigm for view synthesis, achieving photorealistic image quality. However, a fundamental obstacle to making these methods practical is the extreme computational and memory requirements caused by the required volume integrations along the rendered rays during training and inference. Millions of rays, each requiring hundreds of forward passes through a neural network, are needed to approximate those integrations with Monte Carlo sampling. Here, we propose automatic integration, a new framework for learning efficient, closed-form solutions to integrals using coordinate-based neural networks. For training, we instantiate the computational graph corresponding to the derivative of the network and fit this graph to the signal we wish to integrate. After optimization, we reassemble the graph to obtain a network that represents the antiderivative. By the fundamental theorem of calculus, this enables the calculation of any definite integral in two evaluations of the network. Applying this approach to neural rendering, we improve the tradeoff between rendering speed and image quality: render times improve by more than 10 times at the cost of slightly reduced image quality.
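To make the train-then-reassemble procedure concrete, the following is a minimal 1D sketch in JAX (not the authors' implementation): a coordinate-based network Phi plays the role of the antiderivative, its derivative dPhi/dx is obtained by automatic differentiation and fitted to a sample integrand, and any definite integral over [a, b] is then computed as Phi(b) - Phi(a). The network widths, the integrand f, and the training schedule are illustrative assumptions.

```python
# Minimal 1D sketch of automatic integration (not the authors' code), in JAX.
# We parameterize an antiderivative network Phi, fit its derivative dPhi/dx to
# an integrand f, then evaluate definite integrals as Phi(b) - Phi(a).

import jax
import jax.numpy as jnp

def init_params(key, widths=(1, 64, 64, 1)):
    # Small coordinate-based MLP with sine activations (SIREN-style); the
    # initialization scheme here is a simplifying assumption.
    params = []
    for d_in, d_out in zip(widths[:-1], widths[1:]):
        key, sub = jax.random.split(key)
        bound = jnp.sqrt(6.0 / d_in)
        w = jax.random.uniform(sub, (d_in, d_out), minval=-bound, maxval=bound)
        params.append((w, jnp.zeros(d_out)))
    return params

def phi(params, x):
    # Antiderivative network Phi(x); x is a scalar coordinate.
    h = jnp.atleast_1d(x)
    for w, b in params[:-1]:
        h = jnp.sin(30.0 * (h @ w + b))
    w, b = params[-1]
    return (h @ w + b).squeeze()

# "Grad network": the derivative of Phi with respect to its input coordinate,
# built by automatic differentiation. This is the graph fitted to the signal.
dphi_dx = jax.grad(phi, argnums=1)

def f(x):
    # Example integrand (an assumption for illustration).
    return jnp.exp(-x ** 2) * jnp.cos(4.0 * x)

def loss(params, xs):
    # Supervise the derivative network with samples of the integrand.
    preds = jax.vmap(lambda x: dphi_dx(params, x))(xs)
    return jnp.mean((preds - f(xs)) ** 2)

@jax.jit
def train_step(params, xs, lr=1e-4):
    l, grads = jax.value_and_grad(loss)(params, xs)
    params = jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)
    return params, l

key = jax.random.PRNGKey(0)
params = init_params(key)
for step in range(5000):
    key, sub = jax.random.split(key)
    xs = jax.random.uniform(sub, (256,), minval=-2.0, maxval=2.0)
    params, l = train_step(params, xs)

# By the fundamental theorem of calculus, a definite integral over [a, b]
# now costs only two evaluations of the reassembled antiderivative network.
a, b = -1.0, 1.5
integral = phi(params, b) - phi(params, a)
print(float(integral))
```

In the volume rendering setting, the same idea replaces the hundreds of Monte Carlo samples per ray with a small number of antiderivative evaluations along each ray segment, which is where the reported rendering speedup comes from.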