Learning multi-modal representations is an essential step towards real-world robotic applications, and various multi-modal fusion models have been developed for this purpose. However, we observe that existing models, whose objectives are mostly based on joint training, often suffer from learning inferior representations of each modality. We name this problem Modality Failure, and hypothesize that the imbalance between modalities and the implicit bias of common objectives in fusion methods prevent the encoders of each modality from learning sufficient features. To this end, we propose a new multi-modal learning method, Uni-Modal Teacher, which combines the fusion objective with uni-modal distillation to tackle the modality failure problem. We show that our method not only drastically improves the representation of each modality, but also improves overall multi-modal task performance. Our method can be effectively generalized to most multi-modal fusion approaches. We achieve more than 3% improvement on the VGGSound audio-visual classification task, as well as improved performance on the NYU Depth V2 RGB-D image segmentation task.
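To make the idea concrete, below is a minimal sketch (not the authors' implementation) of how a fusion objective can be combined with uni-modal distillation, assuming pretrained uni-modal teacher encoders are available; all function and variable names here are hypothetical.

```python
import torch
import torch.nn.functional as F

def umt_loss(audio, video, labels,
             audio_enc, video_enc, fusion_head,
             audio_teacher, video_teacher,
             distill_weight=1.0):
    # Student encoders produce features for each modality.
    fa = audio_enc(audio)
    fv = video_enc(video)

    # Standard fusion objective: classify from the fused (concatenated) features.
    logits = fusion_head(torch.cat([fa, fv], dim=-1))
    fusion_loss = F.cross_entropy(logits, labels)

    # Uni-modal distillation: match each student encoder's features to the
    # features of a frozen teacher trained on that single modality.
    with torch.no_grad():
        ta = audio_teacher(audio)
        tv = video_teacher(video)
    distill_loss = F.mse_loss(fa, ta) + F.mse_loss(fv, tv)

    # Joint training signal: fusion objective plus weighted distillation terms.
    return fusion_loss + distill_weight * distill_loss
```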