Our goal is to develop stable, accurate, and robust methods for wide-area semantic scene understanding, especially in challenging outdoor environments. To this end, we explore and evaluate a range of related technologies and solutions, including AI-driven multimodal scene perception, fusion, processing, and understanding. This work reports our evaluation of a state-of-the-art semantic segmentation approach on multiple RGB and depth sensing data. We employ four large datasets composed of diverse urban and terrain scenes, and design a variety of experimental methods and metrics. In addition, we develop new multi-dataset learning strategies to improve the detection and recognition of unseen objects. Extensive experiments, implementation details, and results are reported in the paper.