In an ideal world, deployed machine learning models would enhance our society. We hope that these models will make unbiased, ethical decisions that benefit everyone. However, this is not always the case: issues arise at every stage, from data curation to model deployment. The continued use of biased datasets and processes harms communities and increases the cost of fixing the problem. In this work, we walk through the decisions a researcher must make before, during, and after a project to account for the broader impacts of research on the community. Throughout this paper, we highlight the critical decisions that are often overlooked when deploying AI, argue for the use of fairness forensics to uncover bias and fairness issues in systems, assert the need for a responsible human-over-the-loop to bring accountability to deployed systems, and finally, reflect on the need to reconsider research agendas that have harmful societal impacts. We examine visual privacy research and draw lessons that apply broadly to Artificial Intelligence. Our goal is to provide a systematic analysis of the machine learning pipeline for visual privacy and bias issues. With this pipeline, we hope to raise awareness among stakeholders (e.g., researchers, modelers, corporations) of how these issues propagate through the various phases of machine learning.