How can society understand and hold accountable complex human and algorithmic decision-making systems whose systematic errors are opaque to the public? These systems routinely make decisions that affect individual rights and well-being, and that shape society and the democratic process. Practical and statistical constraints on external audits, such as dimensional complexity, can lead researchers and regulators to miss important sources of error in these complex decision-making systems. In this paper, we design and implement a software-supported approach to audit studies that auto-generates audit materials and coordinates volunteer activity. We deployed this software to audit the political advertising policies enacted by Facebook and Google during the 2018 U.S. election. Guided by the software, a team of volunteers posted 477 auto-generated ads and analyzed the companies' actions, finding systematic errors in how the companies enforced their policies. We find that software can overcome some common constraints of audit studies, within limits related to sample size and volunteer capacity.