With social media now a major force in information consumption, the accelerated propagation of fake news presents new challenges for platforms in distinguishing legitimate news from fake news. Effective fake news detection is non-trivial due to the diversity of news domains and the high cost of annotation. In this work, we address the limitations of existing automated fake news detection models by incorporating auxiliary information (e.g., user comments and user-news interactions) into a novel reinforcement learning-based model called \textbf{RE}inforced \textbf{A}daptive \textbf{L}earning \textbf{F}ake \textbf{N}ews \textbf{D}etection (REAL-FND). REAL-FND exploits cross-domain and within-domain knowledge, making it robust in a target domain even though it is trained on a different source domain. Extensive experiments on real-world datasets demonstrate the effectiveness of the proposed model, especially when only limited labeled data is available in the target domain.
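To make the high-level description concrete, the sketch below is a minimal, hypothetical illustration and not the REAL-FND implementation: a REINFORCE-style policy perturbs a learned news representation so that a toy fake-news detector stays confident in the true label while a toy domain classifier is pushed toward the target domain, which is one way a reinforcement learning agent can transfer cross-domain knowledge. All module names, dimensions, and reward terms are assumptions made for illustration only.
\begin{verbatim}
import torch
import torch.nn as nn

# Hypothetical sketch, not the paper's code: a REINFORCE-style policy edits a
# news representation so a (toy) fake-news detector stays confident in the
# true label while a (toy) domain classifier leans toward the target domain.
EMB_DIM = 64                                     # assumed embedding size

detector   = nn.Linear(EMB_DIM, 2)               # fake vs. real (toy stand-in)
domain_clf = nn.Linear(EMB_DIM, 2)               # source vs. target (toy stand-in)
policy     = nn.Sequential(nn.Linear(EMB_DIM, EMB_DIM), nn.Tanh())

optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)

def reward(state, label, target_domain=1):
    # High when the detector is confident in the true label AND the domain
    # classifier believes the edited representation is from the target domain.
    det      = torch.softmax(detector(state), dim=-1)
    det_prob = det.gather(1, label.unsqueeze(1)).squeeze(1)
    dom_prob = torch.softmax(domain_clf(state), dim=-1)[:, target_domain]
    return det_prob + dom_prob

# One policy-gradient update on a toy batch of source-domain representations.
states = torch.randn(8, EMB_DIM)
labels = torch.randint(0, 2, (8,))

mean   = policy(states)                          # mean of a Gaussian edit
dist   = torch.distributions.Normal(mean, 0.1)
action = dist.sample()                           # sampled additive edit
edited = states + action

r    = reward(edited, labels).detach()           # reward is not back-propagated
loss = -(dist.log_prob(action).sum(dim=-1) * r).mean()   # REINFORCE objective

optimizer.zero_grad()
loss.backward()
optimizer.step()
\end{verbatim}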