Recent work in fair machine learning has proposed dozens of technical definitions of algorithmic fairness, along with methods for enforcing them. However, we still lack an understanding of how to develop machine learning systems whose fairness criteria reflect relevant stakeholders' nuanced viewpoints in real-world contexts. To address this gap, we propose a framework for eliciting stakeholders' subjective fairness notions. The framework combines a user interface that lets stakeholders examine the data and the algorithm's predictions with an interview protocol that probes their thoughts as they interact with the interface, allowing us to identify their fairness beliefs and principles. We conduct a user study to evaluate the framework in the setting of a child maltreatment predictive system. Our evaluation shows that the framework enables stakeholders to comprehensively convey their fairness viewpoints. We also discuss how our results can inform the design of predictive systems.