Semantic scene understanding is crucial for robust and safe autonomous navigation, particularly so in off-road environments. Recent deep learning advances for 3D semantic segmentation rely heavily on large sets of training data; however, existing autonomy datasets either represent urban environments or lack multimodal off-road data. We fill this gap with RELLIS-3D, a multimodal dataset collected in an off-road environment, which contains annotations for 13,556 LiDAR scans and 6,235 images. The data was collected on the RELLIS Campus of Texas A&M University and presents challenges to existing algorithms related to class imbalance and environmental topography. Additionally, we evaluate the current state-of-the-art deep learning semantic segmentation models on this dataset. Experimental results show that RELLIS-3D presents challenges for algorithms designed for segmentation in urban environments. This novel dataset provides the resources needed by researchers to continue to develop more advanced algorithms and investigate new research directions to enhance autonomous navigation in off-road environments. RELLIS-3D is available at https://github.com/unmannedlab/RELLIS-3D