Fake news detection algorithms apply machine learning to various news attributes and their relationships. However, their success is usually evaluated by how the algorithm performs on a static benchmark, independent of real users. Meanwhile, studies of user trust in fake news have identified relevant factors such as the user's prior beliefs, the article format, and the source's reputation. We present a user study (n=40) evaluating how warnings issued by fake news detection algorithms affect users' ability to detect misinformation. We find that such warnings strongly influence users' perception of the truth, that even a moderately accurate classifier can improve overall user accuracy, and that users tend to be biased toward agreeing with the algorithm, even when it is incorrect.