Fairness is a concept of justice. Various definitions exist, some of them conflicting with each other. In the absence of a uniformly accepted notion of fairness, choosing the right kind for a specific situation has always been a central issue in human history. When it comes to implementing sustainable fairness in artificial intelligence systems, this old question once again plays a key role: how do we identify the most appropriate fairness metric for a particular application? The answer is often a matter of context, and the best choice depends on ethical standards and legal requirements. Since ethics guidelines on this topic remain rather general for now, we aim to provide more hands-on guidance with this document. We first structure the complex landscape of existing fairness metrics and explain the different options by example. We then propose the "Fairness Compass", a tool which formalises the selection process and makes identifying the most appropriate fairness definition for a given system a simple, straightforward procedure. Because this process also documents the reasoning behind the respective decisions, we argue that the approach can help build user trust by explaining and justifying the implemented fairness.
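To make the claim that fairness definitions can conflict concrete, the following minimal sketch (with toy data of our own construction, not taken from any real system) computes two widely used group fairness metrics for the same classifier: demographic parity, which compares positive prediction rates across groups, and equal opportunity, which compares true positive rates. On this data the classifier satisfies one metric while violating the other.

```python
# Toy illustration (hypothetical data): the same classifier can satisfy
# equal opportunity while violating demographic parity.

def positive_rate(preds):
    """Share of individuals receiving a positive prediction."""
    return sum(preds) / len(preds)

def true_positive_rate(preds, labels):
    """Share of truly positive individuals who are predicted positive."""
    hits = [p for p, y in zip(preds, labels) if y == 1]
    return sum(hits) / len(hits)

# Hypothetical predictions and ground-truth labels for two groups.
preds_a, labels_a = [1, 1, 0, 0], [1, 1, 0, 0]
preds_b, labels_b = [1, 0, 0, 0], [1, 0, 0, 0]

# Demographic parity compares overall positive prediction rates ...
dp_gap = positive_rate(preds_a) - positive_rate(preds_b)

# ... while equal opportunity compares true positive rates.
eo_gap = (true_positive_rate(preds_a, labels_a)
          - true_positive_rate(preds_b, labels_b))

print(f"demographic parity gap: {dp_gap:.2f}")  # 0.25 -> violated
print(f"equal opportunity gap:  {eo_gap:.2f}")  # 0.00 -> satisfied
```

Here the base rate of positive labels differs between the groups, which is precisely the situation in which these two definitions pull in different directions; choosing between them is the kind of context-dependent decision the Fairness Compass is meant to guide.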