To implement fair machine learning in a sustainable way, identifying the right fairness definition is key. However, fairness is a concept of justice, and many definitions exist. Some of them conflict with each other, and there is no universally accepted notion of fairness. The most appropriate fairness definition for an artificial intelligence system often depends on the application, and the right choice is guided by ethical standards and legal requirements. In the absence of officially binding rules, the objective of this document is to structure the complex landscape of existing fairness definitions. We propose the "Fairness Compass", a tool which formalises the selection process and makes identifying the most appropriate fairness metric for a given system a simple, straightforward procedure. We further argue that documenting the reasoning behind the respective decisions in the course of this process can help build user trust by explaining and justifying the implemented fairness.