Misleading and false information has sown chaos in many parts of the world. To mitigate this issue, many researchers have proposed automated fact-checking methods to fight the spread of fake news. However, most methods cannot explain the reasoning behind their decisions, which undermines trust between humans and the machines applying such technology. Trust is essential for fact-checking to be deployed in the real world. Here, we address fact-checking explainability through question answering. In particular, we propose generating questions and answers from claims and answering the same questions from evidence. We also propose an answer comparison model with an attention mechanism attached to each question. By leveraging question answering as a proxy, we break down automated fact-checking into several intermediate steps; this separation aids explainability because it allows for a more detailed analysis of the model's decision-making process. Experimental results show that the proposed model achieves state-of-the-art performance while providing reasonable explainability.
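The following is a minimal sketch, not the authors' implementation, of the per-question answer-comparison idea described above: claim-side and evidence-side answer encodings are compared pairwise, an attention weight is computed for each question, and the weighted aggregate is classified into a verdict. The class name, hidden size, label set, and input encodings are all illustrative assumptions.

```python
# Hypothetical sketch of per-question answer comparison with attention.
# Component names and dimensions are placeholders, not the paper's actual model.
import torch
import torch.nn as nn

class AnswerComparison(nn.Module):
    """Compares claim-side and evidence-side answers, one attention weight per question."""
    def __init__(self, hidden_dim: int):
        super().__init__()
        # Scores how much each question contributes to the final verdict.
        self.attention = nn.Linear(2 * hidden_dim, 1)
        # Maps the aggregated comparison vector to e.g. SUPPORTED / REFUTED / NOT ENOUGH INFO.
        self.classifier = nn.Linear(2 * hidden_dim, 3)

    def forward(self, claim_answers: torch.Tensor, evidence_answers: torch.Tensor):
        # claim_answers, evidence_answers: (num_questions, hidden_dim) encoded answer vectors.
        pair = torch.cat([claim_answers, evidence_answers], dim=-1)   # (Q, 2H)
        weights = torch.softmax(self.attention(pair), dim=0)          # (Q, 1) per-question attention
        pooled = (weights * pair).sum(dim=0)                          # (2H,) weighted aggregate
        return self.classifier(pooled), weights.squeeze(-1)

# Toy usage with random "answer" encodings for 4 question/answer pairs.
model = AnswerComparison(hidden_dim=128)
claim_ans, evid_ans = torch.randn(4, 128), torch.randn(4, 128)
logits, question_weights = model(claim_ans, evid_ans)
# question_weights indicates which questions drove the verdict,
# which is the kind of explainability signal the abstract refers to.
```

The per-question attention weights are what make the intermediate steps inspectable: a human reviewer can see which generated questions, and which disagreements between claim-side and evidence-side answers, most influenced the predicted label.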