Discriminatory practices involving AI-driven police work have been the subject of much controversy in recent years, with algorithms such as COMPAS, PredPol, and ShotSpotter being accused of unfairly impacting minority groups. At the same time, issues of fairness in machine learning, and in particular in computer vision, have become the subject of a growing number of academic works. In this paper, we examine how these areas intersect. We describe how these practices came to exist and the difficulties in alleviating them. We then examine three applications currently in development to understand what risks they pose to fairness and how those risks can be mitigated.