Owing to their popularity, social media platforms have become a primary source of information, and the majority of people now get their news through them. At the same time, fake news on social media has grown exponentially in recent years. Several artificial intelligence-based solutions for detecting fake news have shown promising results. However, these detection systems lack explainability, i.e., the ability to explain why they made a particular prediction. This paper highlights the current state of the art in explainable fake news detection. We discuss the pitfalls of current explainable AI-based fake news detection models and present our ongoing research on a multi-modal explainable fake news detection model.