Algorithms provide powerful tools for detecting and dissecting human bias and error. Here, we develop machine learning methods to analyze how humans err in a particular high-stakes task: image interpretation. We leverage a unique dataset of 16,135,392 human predictions of whether a neighborhood voted for Donald Trump or Joe Biden in the 2020 US election, based on a Google Street View image. We show that by training a machine learning estimator of the Bayes-optimal decision for each image, we can provide an actionable decomposition of human error into bias, variance, and noise terms, and further identify specific features (like pickup trucks) that lead humans astray. Our methods can be applied to ensure that human-in-the-loop decision-making is accurate and fair, and are also applicable to black-box algorithmic systems.
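The decomposition described above can be illustrated with a minimal sketch. This is not the paper's implementation; it assumes a standard bias/variance/noise split of squared error for a single image, where `p_bayes` is a hypothetical ML estimate of the Bayes-optimal probability for that image and `preds` holds independent binary human predictions:

```python
import numpy as np

def decompose_error(p_bayes, preds):
    """Hypothetical per-image decomposition of human error (a sketch,
    not the authors' code).

    p_bayes : estimated Bayes-optimal probability for the image
    preds   : array of 0/1 human predictions for the same image
    """
    preds = np.asarray(preds, dtype=float)
    mean_pred = preds.mean()
    noise = p_bayes * (1.0 - p_bayes)      # irreducible error at the Bayes optimum
    bias = (mean_pred - p_bayes) ** 2      # systematic deviation of the human crowd
    variance = preds.var()                 # disagreement among individual humans
    return {"bias": bias, "variance": variance, "noise": noise}
```

For example, if the estimated Bayes-optimal probability is 0.5 and four humans predict `[1, 0, 1, 0]`, the humans are unbiased (bias 0) but exhibit variance 0.25 on top of the irreducible noise of 0.25.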