Recent research has shown the effectiveness of mmWave radar sensing for object detection in low-visibility environments, making it an ideal sensing technique for autonomous navigation systems. In this paper, we introduce Radar to Point Cloud (R2P), a deep learning model that generates a smooth, dense, and highly accurate point cloud representation of a 3D object with fine geometry details, based on rough and sparse point clouds with incorrect points obtained from mmWave radar. These input point clouds are converted from the 2D depth images generated from raw mmWave radar sensor data, and they are characterized by inconsistency as well as orientation and shape errors. R2P adopts an architecture of two sequential deep-learning encoder-decoder blocks to extract the essential features of the radar-based input point clouds of an object observed from multiple viewpoints, and to ensure the internal consistency of the generated output point cloud and its accurate, detailed reconstruction of the original object's shape. We implement R2P to replace Stage 2 of our recently proposed 3DRIMR (3D Reconstruction and Imaging via mmWave Radar) system. Our experiments demonstrate that R2P significantly outperforms popular existing methods such as PointNet, PCN, and the original 3DRIMR design.
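The abstract specifies only that R2P stacks two sequential encoder-decoder blocks that map a rough, sparse radar point cloud to a dense, refined one. The following is a minimal sketch of such a design, assuming a PointNet-style shared-MLP encoder with symmetric max pooling and a fully connected decoder; all class names, layer widths, and output sizes (EncoderDecoderBlock, R2PSketch, feat_dim, num_out_points) are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class EncoderDecoderBlock(nn.Module):
    """One encoder-decoder block: a PointNet-style shared per-point MLP
    encoder with max pooling, followed by an MLP decoder that regresses
    a refined point set. Layer sizes are illustrative, not from the paper."""
    def __init__(self, num_out_points=2048, feat_dim=1024):
        super().__init__()
        self.num_out_points = num_out_points
        # Shared per-point MLP, implemented as 1-D convolutions over points.
        self.encoder = nn.Sequential(
            nn.Conv1d(3, 128, 1), nn.ReLU(),
            nn.Conv1d(128, 256, 1), nn.ReLU(),
            nn.Conv1d(256, feat_dim, 1),
        )
        # Decoder maps the pooled global feature back to 3-D coordinates.
        self.decoder = nn.Sequential(
            nn.Linear(feat_dim, 1024), nn.ReLU(),
            nn.Linear(1024, 1024), nn.ReLU(),
            nn.Linear(1024, num_out_points * 3),
        )

    def forward(self, pts):                       # pts: (B, N, 3)
        feat = self.encoder(pts.transpose(1, 2))  # (B, feat_dim, N)
        global_feat = feat.max(dim=2).values      # symmetric max pool -> (B, feat_dim)
        out = self.decoder(global_feat)           # (B, num_out_points * 3)
        return out.view(-1, self.num_out_points, 3)

class R2PSketch(nn.Module):
    """Two sequential encoder-decoder blocks, as the abstract describes:
    the first block processes the merged multi-view radar point cloud,
    the second refines the intermediate result into the final output."""
    def __init__(self):
        super().__init__()
        self.block1 = EncoderDecoderBlock()
        self.block2 = EncoderDecoderBlock()

    def forward(self, radar_pts):   # radar_pts: (B, N, 3) merged radar points
        coarse = self.block1(radar_pts)
        return self.block2(coarse)
```

In this kind of point cloud generation pipeline, training would typically minimize a permutation-invariant set distance (e.g., Chamfer distance) between the output and the ground-truth point cloud; the paper's actual loss is not stated in the abstract.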