Recent years have witnessed remarkable progress in computational fake news detection. To mitigate its negative impact, we argue that it is critical to understand what user attributes potentially cause users to share fake news. The key to this causal-inference problem is to identify confounders -- variables that cause spurious associations between treatments (e.g., user attributes) and the outcome (e.g., user susceptibility). In fake news dissemination, confounders can be characterized by fake news sharing behavior, which inherently relates to user attributes and online activities. Learning such user behavior is typically subject to selection bias toward users who are susceptible to sharing news on social media. Drawing on causal inference theories, we first propose a principled approach to alleviating selection bias in fake news dissemination. We then treat the learned unbiased fake news sharing behavior as a surrogate confounder that can fully capture the causal links between user attributes and user susceptibility. We characterize the effectiveness of the proposed approach both theoretically and empirically, and find that it could be useful in protecting society from the perils of fake news.
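To make the debiasing idea concrete, the following minimal sketch illustrates one standard way to alleviate selection bias: inverse propensity weighting, where users' probability of being observed is estimated from their attributes and used to reweight the sharing-behavior model. This is only an illustration of the general technique under toy assumptions (synthetic data, logistic models), not necessarily the approach proposed in the paper.

```python
# Hypothetical sketch: inverse-propensity weighting to de-bias observed
# fake-news sharing data before learning sharing behavior.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy data: user_attrs are user attribute vectors; shared marks whether a
# user shared a fake news piece; observed marks whether the user's sharing
# behavior is visible at all (the source of selection bias).
n_users, n_attrs = 1000, 8
user_attrs = rng.normal(size=(n_users, n_attrs))
observed = rng.binomial(1, 1.0 / (1.0 + np.exp(-user_attrs[:, 0])))
shared = rng.binomial(1, 1.0 / (1.0 + np.exp(-user_attrs[:, 1])))

# Step 1: estimate each user's propensity of being observed from attributes.
propensity_model = LogisticRegression().fit(user_attrs, observed)
propensity = propensity_model.predict_proba(user_attrs)[:, 1].clip(1e-3, 1.0)

# Step 2: fit the sharing-behavior model on observed users only, weighting
# each by 1 / propensity so under-represented users count more.
mask = observed == 1
behavior_model = LogisticRegression().fit(
    user_attrs[mask], shared[mask], sample_weight=1.0 / propensity[mask]
)

# The re-weighted model approximates unbiased sharing behavior, which could
# then serve as a surrogate confounder when estimating how user attributes
# affect susceptibility.
print(behavior_model.coef_)
```

In this sketch, the clipping of the estimated propensities is a common practical safeguard against extreme weights; the specific thresholds and model classes are illustrative choices, not prescriptions from the paper.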