We integrate sparse radar data into a monocular depth estimation model and introduce a novel preprocessing method that mitigates the sparseness and limited field of view of radar measurements. We analyze the intrinsic error of different radar modalities and show that our proposed method yields more data points with reduced error. We further propose a novel deep-learning method, based on the deep ordinal regression network by Fu et al., for estimating dense depth maps from monocular 2D images and sparse radar measurements. Radar data are integrated by first converting the sparse 2D points to height-extended 3D measurements and then feeding them into the network via a late fusion approach. Experiments on the nuScenes dataset demonstrate state-of-the-art performance in both day and night scenes.
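The height-extension step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a pinhole camera model with intrinsics `K`, radar returns already in camera coordinates, and illustrative values for the extension height `h_ext` and the sample count `n_samples`.

```python
import numpy as np

def height_extend_radar(points_cam, K, img_h, img_w, h_ext=2.0, n_samples=20):
    """Sketch: lift each sparse radar return to a vertical line segment
    (height extension), then project the segment into the image to form
    a sparse depth map. `h_ext` (meters) and `n_samples` are assumptions
    for illustration, not values from the paper."""
    depth = np.zeros((img_h, img_w), dtype=np.float32)
    for x, y, z in points_cam:  # camera coords: x right, y down, z forward
        # replicate the point upward along the vertical (y) axis
        ys = np.linspace(y - h_ext, y, n_samples)
        pts = np.stack([np.full(n_samples, x), ys, np.full(n_samples, z)], axis=1)
        # project with the pinhole model: [u, v, 1] ~ K @ [x, y, z]
        uvw = (K @ pts.T).T
        u = (uvw[:, 0] / uvw[:, 2]).astype(int)
        v = (uvw[:, 1] / uvw[:, 2]).astype(int)
        ok = (u >= 0) & (u < img_w) & (v >= 0) & (v < img_h)
        for ui, vi in zip(u[ok], v[ok]):
            # keep the nearest return when extended lines overlap
            if depth[vi, ui] == 0 or z < depth[vi, ui]:
                depth[vi, ui] = z
    return depth
```

The resulting depth map contains vertical stripes of valid depth instead of isolated pixels, which gives the late-fusion branch of the network denser supervision from the same number of radar returns.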