In this technical report, we improve the DVGO framework (which we call DVGOv2); it is based on PyTorch and uses the simplest dense-grid representation. First, we re-implement part of the PyTorch operations in CUDA, achieving a 2-3x speedup. The CUDA extension is automatically compiled just in time. Second, we extend DVGO to support forward-facing and unbounded inward-facing capturing. Third, we improve the space and time complexity of the distortion loss proposed by mip-NeRF 360 from O(N^2) to O(N). The distortion loss improves our quality and training speed. Our efficient implementation could allow more future works to benefit from the loss.
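To illustrate the third point, the pairwise term of the distortion loss, sum over i,j of w_i * w_j * |m_i - m_j|, can be rewritten with prefix sums so that each ray is processed in a single pass over its sorted samples. Below is a minimal PyTorch sketch of that idea; the function and argument names are hypothetical and do not reflect the actual DVGOv2 implementation.

```python
# Sketch of an O(N) distortion loss via prefix sums (not the official DVGOv2 code).
# Assumes samples along each ray are sorted by their midpoints m.
import torch

def distortion_loss(w, m, interval):
    """w: (R, N) ray weights, m: (R, N) sorted sample midpoints,
    interval: (R, N) sample interval lengths (t_{i+1} - t_i)."""
    # Inclusive prefix sums of the weights and the weighted midpoints.
    w_cum = torch.cumsum(w, dim=-1)
    wm_cum = torch.cumsum(w * m, dim=-1)
    # Exclusive prefix sums, i.e. sums over j < i.
    w_prev = w_cum - w
    wm_prev = wm_cum - w * m
    # Pairwise term: 2 * sum_i w_i * (m_i * sum_{j<i} w_j - sum_{j<i} w_j * m_j)
    loss_bi = 2.0 * (w * (m * w_prev - wm_prev)).sum(dim=-1)
    # Intra-sample term: (1/3) * sum_i w_i^2 * (t_{i+1} - t_i)
    loss_uni = (w.pow(2) * interval).sum(dim=-1) / 3.0
    return (loss_bi + loss_uni).mean()
```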