Automated fact checking systems have been proposed to provide veracity predictions quickly and at scale, mitigating the negative influence of fake news on people and on public opinion. However, most studies focus on the veracity classifiers of those systems, which merely predict the truthfulness of news articles. We posit that effective fact checking also relies on people's understanding of the predictions. In this paper, we propose elucidating fact checking predictions with counterfactual explanations to help people understand why a specific piece of news was identified as fake. In this work, generating counterfactual explanations for fake news involves three steps: asking good questions, finding contradictions, and reasoning appropriately. We frame this research question as contradicted entailment reasoning through question answering (QA). We first ask questions about the false claim and retrieve potential answers from the relevant evidence documents. Then, we identify the answer most contradictory to the false claim using an entailment classifier. Finally, a counterfactual explanation is created from the matched QA pair in three different counterfactual explanation forms. Experiments are conducted on the FEVER dataset with both system and human evaluations. Results suggest that the proposed approach generates the most helpful explanations compared with state-of-the-art methods.
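To make the described pipeline concrete, the following is a minimal sketch of the three steps (question answering over evidence, contradiction scoring with an entailment classifier, and explanation construction), assuming off-the-shelf Hugging Face QA and NLI models. The model names, the way a QA pair is turned into an NLI hypothesis, and the explanation template are illustrative assumptions only; they are not the paper's released implementation or its specific explanation forms.

```python
# Sketch of the QA + entailment pipeline described in the abstract.
# Assumptions: generic pretrained QA and MNLI models; a naive hypothesis
# template; one hypothetical counterfactual explanation form.
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline

# Steps 1-2: ask questions about the claim and retrieve candidate answers
# from the relevant evidence documents with an extractive QA model.
qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

# Step 3: score how strongly each retrieved answer contradicts the claim
# with an NLI (entailment) classifier.
nli_name = "roberta-large-mnli"
nli_tokenizer = AutoTokenizer.from_pretrained(nli_name)
nli_model = AutoModelForSequenceClassification.from_pretrained(nli_name)

def contradiction_score(premise: str, hypothesis: str) -> float:
    """Probability that the hypothesis contradicts the premise."""
    inputs = nli_tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
    probs = nli_model(**inputs).logits.softmax(dim=-1)[0]
    return probs[nli_model.config.label2id["CONTRADICTION"]].item()

def counterfactual_explanation(claim: str, questions: list[str], evidence_docs: list[str]) -> str:
    """Pick the QA pair most contradictory to the claim and verbalize it."""
    best = None
    for question in questions:
        for doc in evidence_docs:
            answer = qa(question=question, context=doc)["answer"]
            # Naive hypothesis: concatenate the question and retrieved answer.
            score = contradiction_score(claim, f"{question} {answer}")
            if best is None or score > best[0]:
                best = (score, question, answer)
    _, question, answer = best
    # One illustrative counterfactual template (the paper uses three forms).
    return f"The claim would hold only if the answer to '{question}' were not '{answer}'."
```

A usage example would pass the false claim, a set of generated questions, and the retrieved FEVER evidence passages; the returned sentence is one possible counterfactual verbalization of the most contradictory QA pair.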