Existing work on fairness typically focuses on making known machine learning algorithms fairer. Fair variants of classification, clustering, outlier detection, and other families of algorithms exist. However, an understudied area is auditing an algorithm's output to determine fairness. Existing work has explored the two-group classification problem for binary protected-status variables using standard definitions of statistical parity. Here we build upon the area of auditing by exploring the multi-group setting under more complex definitions of fairness.
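To make the audit setting concrete, the following is a minimal sketch of a multi-group statistical-parity check. The function name and the choice of the maximum pairwise gap in positive-prediction rates as the parity measure are illustrative assumptions, not the paper's method:

```python
from collections import defaultdict

def statistical_parity_gap(groups, predictions):
    """Largest pairwise gap in positive-prediction rates across
    protected groups (0 means perfect statistical parity).

    groups: list of group labels, one per example
    predictions: list of 0/1 predicted labels
    NOTE: illustrative audit measure, not the paper's definition.
    """
    pos = defaultdict(int)   # positive predictions per group
    tot = defaultdict(int)   # total examples per group
    for g, y in zip(groups, predictions):
        tot[g] += 1
        pos[g] += y
    rates = {g: pos[g] / tot[g] for g in tot}
    return max(rates.values()) - min(rates.values())

# Toy audit over three groups (per-group rates: a=0.5, b=1.0, c=0.0):
groups = ["a", "a", "b", "b", "c", "c"]
preds = [1, 0, 1, 1, 0, 0]
print(statistical_parity_gap(groups, preds))  # → 1.0
```

Note that with more than two groups there is no single parity ratio; an auditor must aggregate pairwise disparities somehow (maximum gap, as above, is one common choice), which is part of what makes the multi-group setting harder than the binary one.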