We propose a learning-based multi-view stereo (MVS) method for scattering media, such as fog or smoke, built on a novel cost volume called the dehazing cost volume. Images captured in scattering media are degraded by light scattering and attenuation caused by suspended particles. This degradation depends on scene depth, so traditional MVS methods struggle to evaluate photometric consistency: the depth is unknown before three-dimensional (3D) reconstruction. The dehazing cost volume resolves this chicken-and-egg problem of depth estimation and image restoration by computing the scattering effect at each swept plane in the cost volume. We also propose a method of estimating the scattering parameters, namely the airlight and the scattering coefficient, which our dehazing cost volume requires. The output depth of a network with our dehazing cost volume can be regarded as a function of these parameters; they are therefore geometrically optimized against a sparse 3D point cloud obtained in the structure-from-motion step. Experimental results on synthesized hazy images show the effectiveness of our dehazing cost volume over the ordinary cost volume in scattering media. We also demonstrate the applicability of our dehazing cost volume to real foggy scenes.
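The core idea can be sketched with the standard atmospheric scattering model, I(x) = J(x)·t(x) + A·(1 − t(x)) with transmission t(x) = exp(−β·z(x)), where J is the haze-free radiance, A the airlight, β the scattering coefficient, and z the depth. At each swept plane, the hypothesized depth fixes t, so the scattering effect can be inverted before measuring photometric consistency. The following minimal NumPy sketch is illustrative only (the function names and the simple absolute-difference cost are our assumptions, not the paper's implementation):

```python
import numpy as np

def dehaze_at_depth(hazy, depth, airlight, beta):
    """Invert the scattering model I = J*t + A*(1 - t) at a hypothesized
    plane depth, where t = exp(-beta * depth).  (Illustrative sketch.)"""
    t = np.exp(-beta * depth)
    return (hazy - airlight * (1.0 - t)) / t

def dehazing_cost_slice(ref, warped_src, depth, airlight, beta):
    """One slice of a (hypothetical) dehazing cost volume: remove the
    scattering effect implied by the swept plane's depth from both the
    reference view and the source view warped onto that plane, then
    compare the restored radiances photometrically."""
    ref_clear = dehaze_at_depth(ref, depth, airlight, beta)
    src_clear = dehaze_at_depth(warped_src, depth, airlight, beta)
    return np.abs(ref_clear - src_clear)  # simple L1 photo-consistency

# Synthetic check: haze a clean value J at true depth z, then verify the
# inversion recovers J when the plane hypothesis matches the true depth.
J, A, beta, z = 0.8, 1.0, 0.1, 5.0
t = np.exp(-beta * z)
I = J * t + A * (1.0 - t)
restored = dehaze_at_depth(I, z, A, beta)
```

Only at the correct plane depth does the inversion restore the true radiance; this is why the scattering compensation must happen inside the cost volume rather than as a separate pre-processing dehazing step.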