Multimodal sentiment analysis is a trending area of research, and multimodal fusion is one of its most active topics. Acknowledging that humans communicate through a variety of channels (i.e., visual, acoustic, linguistic), multimodal systems aim at integrating different unimodal representations into a synthetic one. So far, considerable effort has been devoted to developing complex architectures that allow the fusion of these modalities. However, such systems are mainly trained by minimising simple losses such as $L_1$ or cross-entropy. In this work, we investigate unexplored penalties and propose a set of new objectives that measure the dependency between modalities. We demonstrate that our new penalties lead to a consistent improvement (up to $4.3$ points of accuracy) across a large variety of state-of-the-art models on two well-known sentiment analysis datasets: \texttt{CMU-MOSI} and \texttt{CMU-MOSEI}. Our method not only achieves new state-of-the-art results on both datasets but also produces representations that are more robust to modality drops. Finally, a by-product of our methods is a statistical network that can be used to interpret the high-dimensional representations learnt by the model.
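The abstract does not spell out the form of the dependency penalties. As a rough illustration only, the sketch below shows one plausible way such a penalty could be implemented: a MINE-style (Donsker--Varadhan) lower bound on the mutual information between two modality representations, estimated with a small statistics network and subtracted from the task loss. The names \texttt{StatisticsNetwork}, \texttt{mine\_lower\_bound}, \texttt{lambda\_dep}, \texttt{z\_text} and \texttt{z\_audio} are hypothetical and not taken from the paper.
\begin{verbatim}
import math
import torch
import torch.nn as nn

class StatisticsNetwork(nn.Module):
    """Small critic T(z_a, z_b) scoring pairs of modality representations."""
    def __init__(self, dim_a, dim_b, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim_a + dim_b, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, z_a, z_b):
        return self.net(torch.cat([z_a, z_b], dim=-1))

def mine_lower_bound(T, z_a, z_b):
    """Donsker-Varadhan lower bound on I(z_a; z_b).

    Aligned pairs in the batch act as joint samples; shuffling z_b along
    the batch dimension approximates the product of marginals.
    """
    joint = T(z_a, z_b).mean()
    z_b_shuffled = z_b[torch.randperm(z_b.size(0))]
    marginal = torch.logsumexp(T(z_a, z_b_shuffled), dim=0) \
        - math.log(z_b.size(0))
    return joint - marginal

# Total objective (illustrative): minimise the task loss while
# maximising the dependency term between two modality embeddings.
# loss = task_loss - lambda_dep * mine_lower_bound(T, z_text, z_audio)
\end{verbatim}
The same statistics network trained to estimate this bound is the kind of by-product that could be reused to probe how strongly the learnt high-dimensional representations depend on each other.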