Humans express their opinions and emotions through multiple modalities, mainly textual, acoustic, and visual. Prior work on multimodal sentiment analysis mostly applies Recurrent Neural Networks (RNNs) to model aligned multimodal sequences. However, aligning multimodal sequences is impractical because different modalities have different sampling rates. Moreover, RNNs are prone to vanishing or exploding gradients and have limited capacity for learning long-range dependencies, which is the major obstacle to modeling unaligned multimodal sequences. In this paper, we introduce Graph Capsule Aggregation (GraphCAGE), which models unaligned multimodal sequences with a graph-based neural model and a Capsule Network. By converting sequence data into a graph, the aforementioned problems of RNNs are avoided. In addition, the aggregation capability of the Capsule Network and the graph-based structure make our model interpretable and better at capturing long-range dependencies. Experimental results show that GraphCAGE achieves state-of-the-art performance on two benchmark datasets, with representations refined by the Capsule Network and interpretation provided.
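To make the two ideas named above concrete, here is a minimal sketch of (a) converting an unaligned sequence into a graph, so that distant but related time steps are connected directly rather than through recurrence, and (b) aggregating node features into one sequence-level representation with capsule-style dynamic routing, whose routing coefficients offer a form of interpretability. This is an illustrative assumption, not the paper's actual GraphCAGE implementation; all names (sequence_to_graph, capsule_aggregate), the cosine-similarity threshold, and the dimensions are hypothetical choices.

```python
import numpy as np

def squash(v, eps=1e-8):
    # Capsule squashing non-linearity: keeps the direction of v,
    # maps its norm into (0, 1).
    norm = np.linalg.norm(v, axis=-1, keepdims=True)
    return (norm**2 / (1.0 + norm**2)) * v / (norm + eps)

def sequence_to_graph(x, threshold=0.5):
    # Nodes are time steps; edges connect steps with similar features,
    # so long-range dependencies become one-hop neighbors (no recurrence).
    norms = np.linalg.norm(x, axis=1, keepdims=True)
    sim = (x @ x.T) / (norms * norms.T + 1e-8)      # cosine similarity
    adj = (sim > threshold).astype(float)
    deg = adj.sum(axis=1, keepdims=True)
    return adj / np.maximum(deg, 1.0)               # row-normalized adjacency

def graph_conv(x, adj):
    # One propagation step: each node averages its neighbors' features.
    return np.tanh(adj @ x)

def capsule_aggregate(nodes, out_dim=8, iters=3, seed=0):
    # Dynamic routing: node capsules vote for a single sequence-level
    # capsule; agreement between votes and the output sharpens the
    # routing coefficients over iterations.
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((nodes.shape[1], out_dim)) * 0.1
    votes = nodes @ W                                # (num_nodes, out_dim)
    logits = np.zeros(len(votes))
    for _ in range(iters):
        c = np.exp(logits) / np.exp(logits).sum()    # routing coefficients
        v = squash((c[:, None] * votes).sum(axis=0))
        logits = logits + votes @ v                  # agreement update
    return v, c  # sequence representation and interpretable weights

# Toy unaligned modality: its own length (12) and feature size (16);
# other modalities could use different lengths without any alignment step.
text = np.random.default_rng(1).standard_normal((12, 16))
rep, weights = capsule_aggregate(graph_conv(text, sequence_to_graph(text)))
print(rep.shape, weights.round(3))
```

The routing coefficients `weights` indicate how strongly each time step contributes to the final representation, which is one plausible reading of the "interpretation" the abstract refers to.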