The field of automated machine learning (AutoML) introduces techniques that automate parts of the development of machine learning (ML) systems, accelerating the process and reducing barriers for novices. However, decisions derived from ML models can reproduce, amplify, or even introduce unfairness in our societies, causing harm to (groups of) individuals. In response, researchers have started to propose AutoML systems that jointly optimize fairness and predictive performance to mitigate fairness-related harm. However, fairness is a complex and inherently interdisciplinary subject, and solely posing it as an optimization problem can have adverse side effects. With this work, we aim to raise awareness among developers of AutoML systems about such limitations of fairness-aware AutoML, while also calling attention to the potential of AutoML as a tool for fairness research. We present a comprehensive overview of different ways in which fairness-related harm can arise and the ensuing implications for the design of fairness-aware AutoML. We conclude that while fairness cannot be automated, fairness-aware AutoML can play an important role in the toolbox of an ML practitioner. We highlight several open technical challenges for future work in this direction. Additionally, we advocate for the creation of more user-centered assistive systems designed to tackle challenges encountered in fairness work.