Multimodal fusion can make semantic segmentation more robust. However, fusing an arbitrary number of modalities remains underexplored. To delve into this problem, we create the DeLiVER arbitrary-modal segmentation benchmark, covering Depth, LiDAR, multiple Views, Events, and RGB. Aside from this, we provide this dataset in four severe weather conditions as well as five sensor failure cases to exploit modal complementarity and resolve partial outages. To make this possible, we present the arbitrary cross-modal segmentation model CMNeXt. It encompasses a Self-Query Hub (SQ-Hub) designed to extract effective information from any modality for subsequent fusion with the RGB representation, adding only a negligible number of parameters (~0.01M) per additional modality. On top of that, to efficiently and flexibly harvest discriminative cues from the auxiliary modalities, we introduce the simple Parallel Pooling Mixer (PPX). With extensive experiments on a total of six benchmarks, our CMNeXt achieves state-of-the-art performance on the DeLiVER, KITTI-360, MFNet, NYU Depth V2, UrbanLF, and MCubeS datasets, scaling from 1 to 81 modalities. On the freshly collected DeLiVER, the quad-modal CMNeXt reaches up to 66.30% in mIoU, a +9.10% gain compared to the mono-modal baseline. The DeLiVER dataset and our code are at: https://jamycheung.github.io/DELIVER.html.
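The abstract does not specify the PPX internals; as a rough illustration of the general idea of a parallel-pooling token mixer, the sketch below pools a feature map at several scales in parallel and combines the results with a residual connection. The function names (`avg_pool2d`, `parallel_pool_mix`) and the kernel sizes are hypothetical, not taken from the paper.

```python
import numpy as np

def avg_pool2d(x, k):
    # Same-size average pooling with zero padding, k x k kernel, stride 1.
    H, W, C = x.shape
    p = k // 2
    xp = np.pad(x, ((p, p), (p, p), (0, 0)))
    out = np.zeros_like(x)
    for i in range(H):
        for j in range(W):
            out[i, j] = xp[i:i + k, j:j + k].mean(axis=(0, 1))
    return out

def parallel_pool_mix(x, kernels=(3, 7, 11)):
    # Hypothetical parallel pooling mixer: pool at several scales in
    # parallel, average the pooled maps, and add them back residually.
    pooled = sum(avg_pool2d(x, k) for k in kernels) / len(kernels)
    return x + pooled

feat = np.random.rand(8, 8, 4)   # H x W x C feature map
mixed = parallel_pool_mix(feat)  # same shape as the input
```

Pooling-based mixers of this kind are attractive for auxiliary modalities because they have no learned attention weights, which keeps the per-modality parameter overhead small.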