Recent works on 3D semantic segmentation propose to exploit the synergy between images and point clouds by processing each modality with a dedicated network and projecting learned 2D features onto 3D points. Merging large-scale point clouds and images raises several challenges, such as constructing a mapping between points and pixels, and aggregating features between multiple views. Current methods require mesh reconstruction or specialized sensors to recover occlusions, and use heuristics to select and aggregate available images. In contrast, we propose an end-to-end trainable multi-view aggregation model leveraging the viewing conditions of 3D points to merge features from images taken at arbitrary positions. Our method can combine standard 2D and 3D networks and outperforms both 3D models operating on colorized point clouds and hybrid 2D/3D networks without requiring colorization, meshing, or true depth maps. We set a new state-of-the-art for large-scale indoor/outdoor semantic segmentation on S3DIS (74.7 mIoU 6-Fold) and on KITTI-360 (58.3 mIoU). Our full pipeline is accessible at https://github.com/drprojects/DeepViewAgg, and only requires raw 3D scans and a set of images and poses.
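To make the core idea concrete, below is a minimal illustrative sketch, not the authors' implementation, of projecting 3D points into a posed image, gathering 2D features at the resulting pixels, and weighting each view by its viewing conditions before merging. All names (project_points, ViewWeighting, aggregate_views) and the choice of conditions (depth, viewing angle) are hypothetical assumptions for illustration; see the repository linked above for the actual pipeline.

```python
# Illustrative sketch of point-to-pixel projection and viewing-condition-based
# multi-view feature aggregation (hypothetical names, not the DeepViewAgg API).
import torch
import torch.nn as nn

def project_points(xyz, K, world_to_cam, height, width):
    """Project Nx3 world points into an image; return pixel coords, depth, and a visibility mask."""
    ones = torch.ones(xyz.shape[0], 1, device=xyz.device)
    cam = (world_to_cam @ torch.cat([xyz, ones], dim=1).T).T[:, :3]  # camera-frame coordinates
    depth = cam[:, 2].clamp(min=1e-6)
    uv = (K @ cam.T).T[:, :2] / depth.unsqueeze(1)                   # pinhole projection to pixels
    in_view = (cam[:, 2] > 0) & (uv[:, 0] >= 0) & (uv[:, 0] < width) \
              & (uv[:, 1] >= 0) & (uv[:, 1] < height)
    return uv, depth, in_view

class ViewWeighting(nn.Module):
    """Score each (point, view) pair from its viewing conditions (e.g. depth, viewing angle)."""
    def __init__(self, n_conditions=2, hidden=32):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(n_conditions, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, conditions):                 # conditions: (N_points, N_views, n_conditions)
        return self.mlp(conditions).squeeze(-1)    # unnormalized per-view scores (N_points, N_views)

def aggregate_views(feats_2d, scores, valid):
    """Softmax-merge per-view 2D features, masking views that do not see the point."""
    scores = scores.masked_fill(~valid, float('-inf'))
    weights = torch.softmax(scores, dim=1)                 # (N_points, N_views)
    weights = torch.nan_to_num(weights)                    # points seen by no view get zero weight
    return (weights.unsqueeze(-1) * feats_2d).sum(dim=1)   # merged feature per point (N_points, C)
```

Because the view scores are produced by a small learned module, the whole projection-and-aggregation step can be trained end-to-end together with the 2D and 3D backbones, which is what allows the method to replace hand-crafted view-selection heuristics.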