Vision and language (VL) models are known to exploit non-robust indicators in individual modalities (e.g., introduced by distributional biases) instead of focusing on relevant information in each modality. A small drop in accuracy when a unimodal model is used on a VL task suggests that so-called unimodal collapse has occurred. But how can the amount of unimodal collapse be quantified reliably, at dataset and instance level, so that it can be diagnosed and combated in a targeted way? We present MM-SHAP, a performance-agnostic multimodality score that quantifies the proportion to which a model uses individual modalities in multimodal tasks. MM-SHAP is based on Shapley values and can be applied in two ways: (1) to compare models for their degree of multimodality, and (2) to measure the contribution of individual modalities for a given task and dataset. Experiments with six VL models -- LXMERT, CLIP and four ALBEF variants -- on four VL tasks highlight that unimodal collapse can occur to different degrees and in different directions, contradicting the widespread assumption that unimodal collapse is one-sided. We recommend MM-SHAP for analysing multimodal tasks, to diagnose and guide progress towards multimodal integration. Code available at: https://github.com/Heidelberg-NLP/MM-SHAP
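To make the idea concrete: since MM-SHAP is built from token-level Shapley values, a modality's contribution can be taken as the sum of the absolute Shapley values of its tokens, normalised over both modalities. The following is a minimal sketch of that computation under this assumption; the function name `mm_shap` and the boolean text mask are illustrative and not taken from the released code.

```python
import numpy as np

def mm_shap(phi, is_text):
    """Minimal sketch: per-modality share of a prediction's Shapley mass.

    phi     -- Shapley value of each input token (text and image tokens).
    is_text -- boolean mask marking which entries of phi are text tokens.
    Returns (textual degree, visual degree); the two values sum to 1.
    """
    phi = np.abs(np.asarray(phi, dtype=float))
    mask = np.asarray(is_text, dtype=bool)
    contrib_text = phi[mask].sum()    # total absolute contribution of text
    contrib_image = phi[~mask].sum()  # total absolute contribution of image
    total = contrib_text + contrib_image
    return contrib_text / total, contrib_image / total

# Toy example: 4 text tokens and 4 image patches with made-up Shapley values.
phi = [0.30, -0.10, 0.05, 0.15, -0.20, 0.10, 0.05, 0.05]
mask = [True, True, True, True, False, False, False, False]
t_shap, v_shap = mm_shap(phi, mask)  # (0.6, 0.4): 60% textual degree
```

Because the score normalises contributions rather than measuring accuracy, it stays performance-agnostic: a model can be highly accurate yet show a textual degree near 1, which would indicate collapse onto the language modality.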