Reliable perception must be robust to challenging environmental conditions. Recent efforts have therefore focused on using radar sensors in addition to camera and lidar sensors for perception applications. However, the sparsity of radar point clouds and limited data availability remain challenging for current perception methods. To address these challenges, a novel graph neural network is proposed that uses not only the information of the points themselves but also the relationships between them. The model is designed to consider both point features and point-pair features, embedded in the edges of the graph. Furthermore, a general approach for achieving transformation invariance is proposed that is robust to unseen scenarios and also counteracts the limited data availability. The transformation invariance is achieved through an invariant data representation rather than an invariant model architecture, making the approach applicable to other methods. The proposed RadarGNN model outperforms all previous methods on the RadarScenes dataset. In addition, the effects of different invariances on object detection and semantic segmentation quality are investigated. The code is made available as open-source software at https://github.com/TUMFTM/RadarGNN.
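To illustrate the core idea of invariance through the data representation rather than the model, the following is a minimal sketch (not the RadarGNN implementation itself): a k-nearest-neighbour graph is built over a point cloud, and the edge features are pairwise Euclidean distances, which are invariant to translation and rotation of the whole cloud. Any model consuming only such features inherits the invariance. Function and variable names are illustrative assumptions.

```python
import numpy as np

def build_invariant_graph(points, k=3):
    """Build a k-nearest-neighbour graph whose edge features are
    pairwise Euclidean distances. Distances are invariant under
    translation and rotation of the whole point cloud, so the
    representation (not the model) carries the invariance.
    Illustrative sketch, not the RadarGNN implementation."""
    n = len(points)
    # Pairwise difference vectors and the resulting distance matrix.
    diff = points[:, None, :] - points[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    edges, edge_feats = [], []
    for i in range(n):
        # k nearest neighbours, excluding the point itself (index 0 of argsort).
        nbrs = np.argsort(dist[i])[1:k + 1]
        for j in nbrs:
            edges.append((i, j))
            edge_feats.append(dist[i, j])
    return np.array(edges), np.array(edge_feats)

# Toy 2-D point cloud.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [5.0, 5.0]])
edges, feats = build_invariant_graph(pts, k=2)

# Translating the whole cloud leaves the edge features unchanged.
edges_t, feats_t = build_invariant_graph(pts + np.array([10.0, -3.0]), k=2)
assert np.allclose(feats, feats_t)
```

The same principle extends to richer point-pair features (e.g. relative radial velocities) as long as each feature depends only on point pairs, not on absolute coordinates.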