We present Progressively Deblurring Radiance Field (PDRF), a novel approach to efficiently reconstructing high-quality radiance fields from blurry images. While current state-of-the-art (SoTA) scene reconstruction methods achieve photo-realistic rendering results from clean source views, their performance suffers when the source views are affected by blur, which is commonly observed in images in the wild. Previous deblurring methods either do not account for 3D geometry or are computationally intensive. To address these issues, PDRF, a progressive deblurring scheme for radiance field modeling, accurately models blur by incorporating 3D scene context. PDRF further uses an efficient importance sampling scheme, which enables fast scene optimization. Specifically, PDRF proposes a Coarse Ray Renderer to quickly estimate voxel density and features; a Fine Voxel Renderer is then used to achieve high-quality ray tracing. We perform extensive experiments and show that PDRF is 15X faster than previous SoTA methods while achieving better performance on both synthetic and real scenes.
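The coarse-to-fine pipeline described above can be sketched at a high level: a cheap coarse pass estimates density along each ray, those densities drive importance sampling, and only the resulting samples receive the expensive fine evaluation. This is a minimal illustrative sketch, not PDRF's actual learned renderers; the toy density function and all names here are assumptions for demonstration.

```python
import numpy as np

def coarse_ray_renderer(t_coarse):
    """Coarse stage: cheap per-sample density/feature estimates.
    (Stand-in for a learned Coarse Ray Renderer; uses a toy density
    that peaks mid-ray purely for illustration.)"""
    density = np.exp(-((t_coarse - 0.5) ** 2) / 0.02)
    features = np.stack([t_coarse, density], axis=-1)
    return density, features

def importance_sample(t_coarse, density, n_fine):
    """Place fine samples where coarse density (visible content) is high,
    by inverting the CDF of the normalized coarse densities."""
    weights = density / density.sum()
    cdf = np.cumsum(weights)
    u = (np.arange(n_fine) + 0.5) / n_fine  # stratified, deterministic
    idx = np.searchsorted(cdf, u)
    return t_coarse[np.clip(idx, 0, len(t_coarse) - 1)]

def fine_voxel_renderer(t_fine):
    """Fine stage: higher-quality evaluation only at the important samples,
    composited with standard volume-rendering weights."""
    density = np.exp(-((t_fine - 0.5) ** 2) / 0.02)
    color = np.stack([t_fine] * 3, axis=-1)        # toy per-sample RGB
    delta = np.diff(t_fine, append=t_fine[-1] + 1e-3)
    alpha = 1.0 - np.exp(-density * delta)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    weights = alpha * trans
    return (weights[:, None] * color).sum(axis=0)   # final pixel RGB

# One ray: coarse pass -> importance sampling -> fine pass.
t_coarse = np.linspace(0.0, 1.0, 32)
density, _ = coarse_ray_renderer(t_coarse)
t_fine = np.sort(importance_sample(t_coarse, density, 64))
rgb = fine_voxel_renderer(t_fine)
```

The efficiency gain comes from the asymmetry: the coarse pass touches many samples cheaply, while the fine pass is evaluated only where the coarse density indicates the ray actually intersects content.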