This paper explores a machine learning approach for generating high-resolution point clouds from a single-chip mmWave radar. Unlike lidar and vision-based systems, mmWave radar can operate in harsh environments and see through occlusions such as smoke, fog, and dust. Unfortunately, current mmWave processing techniques offer poor spatial resolution compared to lidar point clouds. This paper presents RadarHD, an end-to-end neural network that constructs lidar-like point clouds from low-resolution radar input. Enhancing radar images is challenging due to the presence of specular and spurious reflections. Radar data also does not map well to traditional image-processing techniques because of the signal's sinc-like spreading pattern. We overcome these challenges by training RadarHD on a large volume of raw I/Q radar data paired with lidar point clouds across diverse indoor settings. Our experiments show the ability to generate rich point clouds even in scenes unobserved during training and in the presence of heavy smoke occlusion. Further, RadarHD's point clouds are of high enough quality to work with existing lidar odometry and mapping workflows.
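As a rough illustration of the sinc-like spreading mentioned above (this sketch is not part of the paper's pipeline, and all parameter values are assumed), the following Python snippet simulates the range profile of a single point reflector seen by an FMCW radar. Because the beat tone is observed only over a finite chirp duration, its FFT produces a main lobe with decaying sidelobes rather than the compact point return a lidar would yield.

```python
# Illustrative sketch only: simulate the sinc-like range response of an
# FMCW radar for one ideal point target. Parameter values are assumed.
import numpy as np

fs = 10e6            # ADC sample rate in Hz (assumed)
n_samples = 256      # samples per chirp (assumed)
beat_freq = 0.7e6    # beat frequency of a hypothetical point target, Hz

t = np.arange(n_samples) / fs
beat = np.exp(2j * np.pi * beat_freq * t)   # ideal point-target beat signal

# Zero-padded FFT over the finite chirp acts like a range profile; the
# rectangular observation window turns the single tone into a sinc-shaped
# main lobe with sidelobes, i.e., the target "spreads" across range bins.
range_profile = np.fft.fft(beat, n=4096)
magnitude_db = 20 * np.log10(np.abs(range_profile) / n_samples + 1e-12)

print(f"peak response: {magnitude_db.max():.1f} dB")
print(f"bins within 20 dB of peak: {(magnitude_db > magnitude_db.max() - 20).sum()}")
```

The second print shows that many range bins remain within 20 dB of the peak, which is one reason conventional thresholding of radar images yields sparse, smeared point clouds compared to lidar.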