With the explosive growth of multimodal reviews on social media platforms, multimodal sentiment analysis has gained increasing popularity because of its high relevance to these posts. Although most previous studies design various fusion frameworks to learn an interactive representation of multiple modalities, they fail to incorporate sentimental knowledge into inter-modality learning. This paper proposes a Multi-channel Attentive Graph Convolutional Network (MAGCN) consisting of two main components: cross-modality interactive learning and sentimental feature fusion. For cross-modality interactive learning, we exploit the self-attention mechanism combined with densely connected graph convolutional networks to learn inter-modality dynamics. For sentimental feature fusion, we utilize multi-head self-attention to merge sentimental knowledge into the inter-modality feature representations. Extensive experiments are conducted on three widely used datasets. The experimental results demonstrate that the proposed model achieves competitive accuracy and F1 scores compared with several state-of-the-art approaches.
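To make the two components concrete, the sketch below shows one plausible reading of the abstract in PyTorch: self-attention scores between two modalities define the adjacency matrix over which a densely connected GCN propagates, and multi-head attention then merges external sentiment-knowledge features into the resulting inter-modality representation. All module names, layer counts, dimensions, the residual connection, and the assumption of length-aligned modality sequences are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch of the two MAGCN components, assuming word-aligned
# modality sequences of equal length. Names and sizes are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DenselyConnectedGCN(nn.Module):
    """GCN stack with dense connections: each layer consumes the
    concatenation of the input and all previous layer outputs."""

    def __init__(self, dim, num_layers=2):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.Linear(dim * (i + 1), dim) for i in range(num_layers)
        )

    def forward(self, x, adj):
        # x: (batch, seq, dim); adj: (batch, seq, seq) attention-derived graph
        outputs = [x]
        for layer in self.layers:
            h = torch.cat(outputs, dim=-1)        # dense connection
            h = F.relu(layer(torch.bmm(adj, h)))  # propagate, then transform
            outputs.append(h)
        return outputs[-1]


class CrossModalBlock(nn.Module):
    """Cross-modality interactive learning: self-attention between two
    modalities builds the graph the densely connected GCN operates on."""

    def __init__(self, dim):
        super().__init__()
        self.query = nn.Linear(dim, dim)
        self.key = nn.Linear(dim, dim)
        self.gcn = DenselyConnectedGCN(dim)

    def forward(self, x_a, x_b):
        # Scaled dot-product attention from modality A to modality B.
        scores = torch.bmm(self.query(x_a), self.key(x_b).transpose(1, 2))
        adj = torch.softmax(scores / x_a.size(-1) ** 0.5, dim=-1)
        return self.gcn(x_b, adj)


class SentimentFusion(nn.Module):
    """Sentimental feature fusion: the inter-modality representation
    attends over sentiment-knowledge features via multi-head attention."""

    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, inter_modal, sentiment):
        fused, _ = self.attn(inter_modal, sentiment, sentiment)
        return fused + inter_modal  # residual keeps inter-modality dynamics


if __name__ == "__main__":
    text = torch.randn(8, 20, 128)   # word-aligned text features
    audio = torch.randn(8, 20, 128)  # acoustic features, same length
    senti = torch.randn(8, 20, 128)  # external sentiment-knowledge features
    inter = CrossModalBlock(128)(text, audio)
    out = SentimentFusion(128)(inter, senti)
    print(out.shape)  # torch.Size([8, 20, 128])
```

In this reading, one CrossModalBlock per modality pair yields the "multi-channel" structure, and the fusion step runs once on the combined inter-modality features; the paper itself should be consulted for the actual channel layout.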