Advances in algorithmic fairness have largely omitted sexual orientation and gender identity. We explore queer concerns in privacy, censorship, language, online safety, health, and employment to study the positive and negative effects of artificial intelligence on queer communities. These issues underscore the need for new directions in fairness research that take into account a multiplicity of considerations, from privacy preservation, context sensitivity, and process fairness to an awareness of sociotechnical impact and the increasingly important role of inclusive and participatory research processes. Most current approaches to algorithmic fairness assume that the target characteristics for fairness--frequently, race and legal gender--can be observed or recorded. Sexual orientation and gender identity are prototypical instances of unobserved characteristics, which are frequently missing, unknown, or fundamentally unmeasurable. This paper highlights the importance of developing new approaches to algorithmic fairness that break away from the prevailing assumption of observed characteristics.
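To make the observed-characteristics assumption concrete, the following is a minimal sketch (not from the paper; the function name and toy data are illustrative) of demographic parity difference, a standard group fairness metric. The point is structural: the metric cannot even be evaluated without a recorded group label for every individual, which is precisely what is missing, unknown, or unmeasurable for characteristics such as sexual orientation and gender identity.

```python
import numpy as np

def demographic_parity_difference(y_pred: np.ndarray, a: np.ndarray) -> float:
    """Absolute gap in positive-prediction rates between two groups.

    y_pred: binary predictions (0/1) for each individual.
    a:      observed binary group membership (0/1). This argument is the
            observed-characteristics assumption in code form: without a
            recorded label for every individual, the metric is undefined.
    """
    rate_g0 = y_pred[a == 0].mean()  # positive rate in group 0
    rate_g1 = y_pred[a == 1].mean()  # positive rate in group 1
    return abs(rate_g1 - rate_g0)

# Toy usage: with observed labels, the metric is well defined...
y_pred = np.array([1, 1, 0, 1, 0, 0])
a_observed = np.array([0, 0, 0, 1, 1, 1])
print(demographic_parity_difference(y_pred, a_observed))  # ~0.333

# ...but if `a` is missing or fundamentally unmeasurable, there is
# nothing to pass in, and any fairness constraint or audit built on
# this metric cannot be applied at all.
```

The same dependency holds for other group fairness criteria (equalized odds, equal opportunity), which is why the paper argues for approaches that do not presuppose an observed protected attribute.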