To implement fair machine learning in a sustainable way, choosing the right fairness objective is key. Since fairness is a concept of justice that comes in various, sometimes conflicting, definitions, this is not a trivial task. The most appropriate fairness definition for an artificial intelligence (AI) system is a matter of ethical standards and legal requirements, and the right choice depends on the particular use case and its context. In this position paper, we propose to use a decision tree as a means of explaining and justifying the implemented kind of fairness to the end users. Such a structure would, first of all, support AI practitioners in mapping ethical principles to fairness definitions for a concrete application, making the selection a straightforward and transparent process. Moreover, this approach would help document the reasoning behind the decision making. Given the general complexity of the topic of fairness in AI, we argue that specifying "fairness" for a given use case is the best way forward to maintain confidence in AI systems; this could be achieved by sharing the reasons and principles expressed during the decision-making process with the broader audience.
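A minimal sketch of the proposed idea, under our own illustrative assumptions: a decision tree whose internal nodes are yes/no questions about the use case and whose leaves are candidate fairness definitions. The questions and the mapping to definitions below are hypothetical examples, not the paper's actual tree; in practice they would come from ethical and legal analysis of the concrete application. Traversing the tree yields both a recommendation and the path of answers, which can be logged to document the reasoning behind the choice.

```python
from dataclasses import dataclass
from typing import Union


@dataclass
class Leaf:
    """A terminal node holding a candidate fairness definition."""
    fairness_definition: str


@dataclass
class Node:
    """An internal node: a yes/no question with two subtrees."""
    question: str
    yes: Union["Node", Leaf]
    no: Union["Node", Leaf]


# Hypothetical example tree (illustrative questions and mappings only).
TREE = Node(
    question="Are base rates expected to differ legitimately between groups?",
    yes=Node(
        question="Are errors costlier for the disadvantaged group?",
        yes=Leaf("equal opportunity (equal true positive rates)"),
        no=Leaf("equalized odds"),
    ),
    no=Leaf("demographic parity"),
)


def select_fairness(node, answers):
    """Walk the tree using a dict of question -> bool, recording the path taken."""
    path = []
    while isinstance(node, Node):
        answer = answers[node.question]
        path.append((node.question, answer))
        node = node.yes if answer else node.no
    return node.fairness_definition, path


definition, path = select_fairness(TREE, {
    "Are base rates expected to differ legitimately between groups?": True,
    "Are errors costlier for the disadvantaged group?": True,
})
print(definition)  # equal opportunity (equal true positive rates)
```

The recorded `path` doubles as the documentation artifact: sharing it with end users makes the reasons and principles behind the selected fairness definition transparent.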