Capturing dense depth maps is challenging for existing active depth acquisition techniques based on sparse illumination, such as LiDAR. Various techniques have been proposed to estimate a dense depth map by fusing a sparse depth measurement with the corresponding RGB image. Recent hardware advances enable adaptive depth measurements, which further improve dense depth map estimation. In this paper, we study the problem of estimating dense depth from adaptive depth sampling. An adaptive sparse depth sampling network is trained jointly with an RGB image and sparse depth fusion network to generate optimal adaptive sampling masks. We show that such adaptive sampling masks generalize well to many RGB and sparse depth fusion algorithms across a variety of sampling rates (as low as $0.0625\%$). The proposed adaptive sampling method is fully differentiable and can be trained end-to-end with upstream perception algorithms.