In 2019, the UK's Immigration and Asylum Chamber of the Upper Tribunal dismissed an asylum appeal, basing its decision on the output of a biometric system alongside other discrepancies. The asylum seeker's fingerprints were found in a biometric database, which contradicted the appellant's account. The Tribunal found this evidence unequivocal and denied the asylum claim. Today, the proliferation of biometric systems is shaping public debates around their political, social and ethical implications. Yet whilst concerns about the racialised use of this technology for migration control have been on the rise, investment in the biometrics industry and its innovation is increasing considerably. Moreover, fairness has recently been adopted by the biometrics community to mitigate bias and discrimination in biometric systems. However, algorithmic fairness cannot distribute justice in scenarios that are broken or whose intended purpose is to discriminate, such as biometrics deployed at the border. In this paper, we offer a critical reading of recent debates about biometric fairness and show their limitations, drawing on research on fairness in machine learning and on critical border studies. Building on previous fairness demonstrations, we prove that biometric fairness criteria are mathematically mutually exclusive. The paper then illustrates empirically, by reproducing experiments from previous works, that a fair biometric system is not possible. Finally, we discuss the politics of fairness in biometrics by situating the debate at the border. We claim that bias and error rates have a different impact on citizens and asylum seekers. Fairness has overshadowed the elephant in the room of biometrics: the focus on demographic biases and on the ethical discourses surrounding algorithms has diverted attention from how these systems reproduce historical and political injustices.
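The mutual exclusivity of fairness criteria claimed above can be sketched numerically, in the spirit of the well-known impossibility results for classification fairness (e.g. Chouldechova's): when two demographic groups have different base rates, a classifier that equalises error rates across groups cannot also equalise predictive parity. The numbers below are illustrative assumptions, not data or code from the paper.

```python
# Numerical sketch of the fairness impossibility result: with unequal base
# rates, equal FPR/TPR across groups (equalised odds) forces unequal PPV
# (predictive parity), so the two criteria cannot hold simultaneously.

def ppv(base_rate, tpr, fpr):
    """Positive predictive value via Bayes' rule."""
    true_pos = base_rate * tpr
    false_pos = (1 - base_rate) * fpr
    return true_pos / (true_pos + false_pos)

# Same error rates for both groups, so equalised odds holds...
tpr, fpr = 0.90, 0.10

# ...but different (hypothetical) base rates, e.g. different rates of
# genuine matches in a biometric gallery for each group.
ppv_a = ppv(0.50, tpr, fpr)   # group A: base rate 0.50 -> PPV = 0.90
ppv_b = ppv(0.05, tpr, fpr)   # group B: base rate 0.05 -> PPV ~ 0.32

print(f"PPV group A: {ppv_a:.3f}")
print(f"PPV group B: {ppv_b:.3f}")

# Predictive parity fails by a wide margin despite identical error rates.
assert abs(ppv_a - ppv_b) > 0.1
```

A positive decision thus carries very different evidential weight for the two groups even though the system is "fair" by the error-rate criterion, which is the structural tension the paper's formal argument develops.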