The rapid proliferation of online misinformation poses significant risks to public trust, policy, and safety, necessitating reliable automated fake news detection. Existing methods often struggle with multimodal content, domain generalization, and explainability. We propose AMPEND-LS, an agentic multi-persona evidence-grounded framework with LLM-SLM synergy for multimodal fake news detection. AMPEND-LS integrates textual, visual, and contextual signals through a structured reasoning pipeline powered by LLMs, augmented with reverse image search, knowledge graph paths, and persuasion strategy analysis. To improve reliability, we introduce a credibility fusion mechanism that combines semantic similarity, domain trustworthiness, and temporal context, together with a complementary SLM classifier that mitigates LLM uncertainty and hallucinations. Extensive experiments across three benchmark datasets demonstrate that AMPEND-LS consistently outperforms state-of-the-art baselines in accuracy, F1 score, and robustness. Qualitative case studies further highlight its transparent reasoning and resilience against evolving misinformation. This work advances the development of adaptive, explainable, and evidence-aware systems for safeguarding online information integrity.
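The abstract names three credibility signals but does not specify how they are fused. A minimal sketch, assuming a simple weighted linear combination (the score ranges, weights, and function names below are hypothetical and only illustrate the idea of fusing the three components):

```python
from dataclasses import dataclass

@dataclass
class Evidence:
    semantic_similarity: float   # [0, 1] similarity between the claim and retrieved evidence
    domain_trust: float          # [0, 1] trustworthiness score of the evidence's source domain
    temporal_consistency: float  # [0, 1] agreement of publication dates with the claim's timeline

def credibility_score(ev: Evidence,
                      w_sem: float = 0.5,
                      w_dom: float = 0.3,
                      w_time: float = 0.2) -> float:
    """Hypothetical weighted fusion of the semantic, domain, and temporal signals.

    The paper's actual fusion mechanism may differ; this only sketches how the
    three credibility components named in the abstract could be combined into
    a single score used downstream by the reasoning pipeline.
    """
    return (w_sem * ev.semantic_similarity
            + w_dom * ev.domain_trust
            + w_time * ev.temporal_consistency)

# Example: evidence that matches the claim well, from a moderately trusted,
# temporally consistent source.
print(credibility_score(Evidence(0.9, 0.6, 0.8)))  # -> 0.79
```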


