The imbalance problem is widespread in machine learning and also arises in multimodal learning, where it stems from the intrinsic discrepancy between the modalities of each sample. Recent works have attempted to solve the modality imbalance problem from an algorithmic perspective, but they do not fully analyze the influence of modality bias in the datasets themselves. Concretely, existing multimodal datasets are usually collected for specific tasks, in which one modality tends to outperform the others under most conditions. In this work, to comprehensively explore the influence of modality bias, we first split existing datasets into subsets by estimating the sample-wise modality discrepancy. Surprisingly, we find that multimodal models equipped with existing imbalance algorithms consistently perform worse than the unimodal model on certain subsets, in accordance with the modality bias. To further explore the influence of modality bias and assess the effectiveness of existing imbalance algorithms, we build a balanced audiovisual dataset in which the modality discrepancy is uniformly distributed over the whole dataset. We then conduct extensive experiments to re-evaluate existing imbalance algorithms and draw several findings: existing algorithms only provide a compromise between modalities and struggle on samples with large modality discrepancy. We hope these findings will facilitate future research on the modality imbalance problem.
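To illustrate the dataset-splitting step described above, the minimal sketch below estimates a per-sample modality discrepancy as the gap between two unimodal models' confidence on the ground-truth class and partitions samples into audio-dominant, visual-dominant, and roughly balanced subsets. The function names, the signed-confidence definition of the discrepancy, and the threshold are illustrative assumptions, not the exact procedure used in this work.

```python
# Illustrative sketch (assumption): estimate a sample-wise modality discrepancy
# from the prediction confidences of two separately trained unimodal models,
# then split the dataset into subsets according to that discrepancy.
import numpy as np

def modality_discrepancy(audio_probs: np.ndarray,
                         visual_probs: np.ndarray,
                         labels: np.ndarray) -> np.ndarray:
    """Signed per-sample discrepancy: audio-model confidence on the ground-truth
    class minus visual-model confidence. Positive values suggest the audio
    modality is more informative for that sample, negative values the visual."""
    idx = np.arange(len(labels))
    return audio_probs[idx, labels] - visual_probs[idx, labels]

def split_by_discrepancy(discrepancy: np.ndarray, threshold: float = 0.2):
    """Partition sample indices into audio-dominant, visual-dominant, and
    roughly balanced subsets; the threshold value is an illustrative choice."""
    audio_dominant = np.where(discrepancy > threshold)[0]
    visual_dominant = np.where(discrepancy < -threshold)[0]
    balanced = np.where(np.abs(discrepancy) <= threshold)[0]
    return audio_dominant, visual_dominant, balanced

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, c = 1000, 10
    labels = rng.integers(0, c, size=n)
    # Stand-ins for the softmax outputs of the two unimodal models.
    audio_probs = rng.dirichlet(np.ones(c), size=n)
    visual_probs = rng.dirichlet(np.ones(c), size=n)
    d = modality_discrepancy(audio_probs, visual_probs, labels)
    subsets = split_by_discrepancy(d)
    print([len(s) for s in subsets])
```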