The widespread dissemination of false information is a rising concern worldwide with critical social impact, inspiring the emergence of fact-checking organizations to mitigate the spread of misinformation. However, human-driven verification is time-consuming and creates a bottleneck: trustworthy checked information cannot be produced at the same pace at which claims emerge. Since misinformation relates not only to the content itself but also to other social features, this paper addresses automatic misinformation checking in social networks from a multimodal perspective. Moreover, as simply labeling a piece of news as incorrect may not convince the citizen and, even worse, may strengthen confirmation bias, we propose a modality-level, explainability-oriented misinformation classification framework. Our framework comprises a misinformation classifier assisted by explainable methods that generate modality-oriented explainable inferences. Preliminary findings show that the misinformation classifier benefits from multimodal information encoding and that the modality-oriented explainability mechanism improves both the interpretability and completeness of the inferences.