Faced with the scale and surge of misinformation on social media, many platforms and fact-checking organizations have turned to algorithms to automate key parts of misinformation detection pipelines. While these algorithms offer a promising solution to the challenge of scale, the ethical and societal risks associated with algorithmic misinformation detection are not well understood. In this paper, we employ and extend the notion of informational justice to develop a framework for explicating issues of justice relating to representation, participation, distribution of benefits and burdens, and credibility in the misinformation detection pipeline. Drawing on the framework: (1) we show how injustices materialize for stakeholders across three algorithmic stages in the pipeline; (2) we suggest empirical measures for assessing these injustices; and (3) we identify potential sources of these harms. This framework should help researchers, policymakers, and practitioners reason about potential harms or risks associated with these algorithms and provide conceptual guidance for the design of algorithmic fairness audits in this domain.