The standard paradigm for fake news detection relies mainly on textual information to model the truthfulness of news. However, the discourse of online fake news is typically subtle, and debunking it from text alone requires expert knowledge. Recently, studies on multimodal fake news detection have outperformed text-only methods. Approaches that use pre-trained models to extract unimodal features, or that fine-tune pre-trained models directly, have become a new paradigm for detecting fake news. However, this paradigm either requires a large number of training instances or updates the entire set of pre-trained model parameters, making it impractical for real-world fake news detection. Furthermore, traditional multimodal methods fuse cross-modal features directly, without considering that uncorrelated semantic representations may inject noise into the multimodal features. This paper proposes a Similarity-Aware Multimodal Prompt Learning (SAMPLE) framework. First, we incorporate prompt learning into multimodal fake news detection. Prompt learning, which tunes only the prompts while keeping the language model frozen, significantly reduces memory usage and achieves performance comparable to fine-tuning. We analyse three prompt templates combined with a soft verbalizer to detect fake news. In addition, we introduce a similarity-aware fusing method that adaptively adjusts the intensity of the multimodal representation and mitigates the noise injected by uncorrelated cross-modal features. In evaluations on two benchmark multimodal datasets, SAMPLE surpasses previous works in both F1 and accuracy, demonstrating the effectiveness of the proposed method in detecting fake news. Moreover, SAMPLE outperforms other approaches in both few-shot and data-rich settings.
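To make the two ideas summarised above more concrete, the following is a minimal sketch (not the authors' implementation) of similarity-aware fusion on top of frozen unimodal encoders: text features would come from a prompt-tuned language model whose backbone stays frozen, image features from a frozen image encoder, and the image contribution is gated by the text-image similarity so that uncorrelated visual features contribute less. All module names, dimensions, and the specific gating function are assumptions for illustration only.

```python
# Illustrative sketch of similarity-aware multimodal fusion (assumptions noted above).
import torch
import torch.nn as nn
import torch.nn.functional as F


class SimilarityAwareFusion(nn.Module):
    """Gate the image contribution by text-image cosine similarity."""

    def __init__(self, text_dim: int = 768, image_dim: int = 512, hidden_dim: int = 256):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, hidden_dim)
        self.image_proj = nn.Linear(image_dim, hidden_dim)
        self.classifier = nn.Linear(hidden_dim, 2)  # fake / real

    def forward(self, text_feat: torch.Tensor, image_feat: torch.Tensor) -> torch.Tensor:
        t = self.text_proj(text_feat)    # (B, hidden_dim)
        v = self.image_proj(image_feat)  # (B, hidden_dim)
        # Cosine similarity in [-1, 1], rescaled to [0, 1] and used as a soft gate:
        # weakly correlated image features are down-weighted before fusion.
        sim = F.cosine_similarity(t, v, dim=-1).unsqueeze(-1)  # (B, 1)
        gate = (sim + 1.0) / 2.0
        fused = t + gate * v
        return self.classifier(fused)


if __name__ == "__main__":
    batch = 4
    text_feat = torch.randn(batch, 768)   # stand-in for frozen-LM prompt features
    image_feat = torch.randn(batch, 512)  # stand-in for frozen image-encoder features
    model = SimilarityAwareFusion()
    logits = model(text_feat, image_feat)
    print(logits.shape)                   # torch.Size([4, 2])
```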