Can crowd workers be trusted to judge whether news-like articles circulating on the Internet are misleading, or do partisanship and inexperience get in the way? And can the task be structured in a way that reduces partisanship? We assembled pools of both liberal and conservative crowd raters and tested three ways of asking them to judge 374 articles. In a no-research condition, raters simply viewed each article and rendered a judgment. In an individual research condition, they were also asked to search for corroborating evidence and provide a link to the best evidence they found. In a collective research condition, they did not search themselves but instead reviewed links collected from workers in the individual research condition. Both research conditions reduced partisan disagreement in judgments. The individual research condition was most effective at producing alignment with journalists' assessments: in this condition, the judgments of a panel of sixteen or more crowd workers were better than those of a panel of three expert journalists, as measured by alignment with a held-out journalist's ratings.
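The panel comparison can be made concrete with a minimal sketch of the held-out evaluation. This is an illustration only: the rating scale, panel sizes beyond those stated, the use of Pearson correlation as the "alignment" measure, and all data below are assumptions, not the paper's actual procedure or results.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder ratings: rows are the 374 articles. In the study these would be
# crowd and journalist judgments of how misleading each article is.
n_articles = 374
crowd_ratings = rng.integers(1, 8, size=(n_articles, 25))       # 25 crowd raters, 1-7 scale (assumed)
journalist_ratings = rng.integers(1, 8, size=(n_articles, 4))   # 4 journalists (assumed)

def panel_alignment(panel, held_out):
    """Correlate a panel's mean per-article rating with a held-out rater's ratings."""
    panel_mean = panel.mean(axis=1)
    return np.corrcoef(panel_mean, held_out)[0, 1]

# A crowd panel of 16 workers and a panel of 3 journalists, each scored by
# alignment with the remaining, held-out journalist.
held_out = journalist_ratings[:, 0]
crowd_panel = crowd_ratings[:, :16]
journalist_panel = journalist_ratings[:, 1:4]

print("crowd panel (k=16) alignment:", panel_alignment(crowd_panel, held_out))
print("journalist panel (k=3) alignment:", panel_alignment(journalist_panel, held_out))
```

With real ratings, the claim in the abstract corresponds to the crowd panel's alignment score meeting or exceeding the journalist panel's score once the crowd panel reaches roughly sixteen raters.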