Autonomous robotic systems and self-driving cars rely on accurate perception of their surroundings, as the safety of passengers and pedestrians is the top priority. Semantic segmentation is one of the essential components of environmental perception, providing semantic information about the scene. Recently, several methods have been introduced for 3D LiDAR semantic segmentation. While they can improve performance, they either suffer from high computational complexity, and are therefore inefficient, or lose the fine details of smaller instances. To alleviate this problem, we propose AF2-S3Net, an end-to-end encoder-decoder CNN for 3D LiDAR semantic segmentation. We present a novel multi-branch attentive feature fusion module in the encoder and a unique adaptive feature selection module with feature map re-weighting in the decoder. Our AF2-S3Net fuses voxel-based learning and point-based learning into a single framework to effectively process large 3D scenes. Our experimental results show that the proposed method outperforms state-of-the-art approaches on the large-scale SemanticKITTI benchmark, ranking 1st on the competitive public leaderboard upon publication.
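The two modules named in the abstract can be sketched at a high level: the attentive fusion module combines per-point features from multiple branches using softmax attention weights, and the adaptive selection module re-weights feature channels with a learned gate. The following NumPy sketch is purely illustrative; the function names, shapes, and gating form are assumptions, not the paper's actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attentive_fusion(branches, attn_logits):
    """Fuse B branch feature maps with per-point attention.

    branches:    list of B arrays, each (N points, C channels)
    attn_logits: (N, B) logits scoring each branch per point
    returns:     (N, C) fused features
    """
    w = softmax(attn_logits, axis=-1)        # (N, B), rows sum to 1
    stacked = np.stack(branches, axis=-1)    # (N, C, B)
    return (stacked * w[:, None, :]).sum(axis=-1)

def adaptive_selection(features, gate_logits):
    """Re-weight feature channels with a sigmoid gate.

    features:    (N, C) fused features
    gate_logits: (C,) per-channel gating logits
    """
    gate = 1.0 / (1.0 + np.exp(-gate_logits))  # sigmoid in [0, 1]
    return features * gate[None, :]
```

With zero attention logits the fusion degenerates to a plain average of the branches, which makes the role of the learned logits easy to see: they let the network favor, per point, whichever branch preserves the relevant scale of detail.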