Fake news causes significant damage to society. To counter it, several studies have been conducted on building detection models and constructing datasets. Most fake news datasets depend on a specific time period. Consequently, detection models trained on such datasets have difficulty detecting novel fake news arising from political and social changes; they may also produce biased output for inputs containing specific person or organization names. We refer to this problem as \textbf{Diachronic Bias} because it is caused by the creation dates of the news in each dataset. In this study, we confirm this bias, especially for proper nouns including person names, from the deviation of phrase appearances across datasets. Based on these findings, we propose masking methods using Wikidata to mitigate the influence of person names, and we validate whether they make fake news detection models robust through experiments with in-domain and out-of-domain data.