In recent years, there has been a flurry of research focusing on the fairness of machine learning models, and in particular on quantifying and eliminating bias against subgroups. One prominent line of work generalizes the notion of subgroups beyond simple discrete classes by introducing the notion of a "rich subgroup," and seeks to train models that are calibrated or equalize error rates with respect to these richer subgroup classes. Largely orthogonally, there has been growing recognition of the importance of understanding how subgroups of the dataset are being treated relative to the rest of the dataset. It can easily be shown that certain training features may be significantly more important (or less important) on a discrete subgroup than on the whole dataset; this difference is called the Feature Importance Disparity (FID). However, there are exponentially many rich subgroups defined by a structured class of functions over protected features (such as race, gender, and age), and there are many ways that feature importance can be defined. In this paper, we develop two approaches to efficiently search the rich subgroup space and find feature/subgroup pairs with large FID that fit within a specified subgroup size. The first approach considers feature importance metrics that are separable and models a two-player, zero-sum game, reducing the computation of constrained-size subgroups with high FID to a cost-sensitive classification problem. The second approach considers non-separable importance metrics and uses heuristic optimization techniques to converge on the subgroups. Both approaches were tested on four different datasets with multiple importance notions; they found feature/subgroup pairs with high FID, often by orders of magnitude, and yield interesting discussions about the reliability and fairness of the datasets.
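The FID notion above can be made concrete with a minimal sketch. Here feature "importance" is taken to be the absolute Pearson correlation between a feature column and the label, a simple stand-in for the separable importance metrics the paper considers; the function names and toy data are illustrative assumptions, not the authors' implementation.

```python
# Sketch of Feature Importance Disparity (FID) for one feature and one
# discrete subgroup: |importance on subgroup - importance on full dataset|.

def pearson(xs, ys):
    """Pearson correlation of two equal-length numeric sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

def fid(feature, label, in_subgroup):
    """FID = | |corr| on subgroup  -  |corr| on whole dataset |."""
    sub_f = [f for f, s in zip(feature, in_subgroup) if s]
    sub_y = [y for y, s in zip(label, in_subgroup) if s]
    return abs(abs(pearson(sub_f, sub_y)) - abs(pearson(feature, label)))

# Toy data: the feature is perfectly predictive inside the subgroup and
# anti-predictive outside it, so the full-dataset correlation cancels to 0.
feature     = [1, 2, 3, 4, 1, 2, 3, 4]
label       = [4, 3, 2, 1, 1, 2, 3, 4]
in_subgroup = [0, 0, 0, 0, 1, 1, 1, 1]
print(round(fid(feature, label, in_subgroup), 3))  # → 1.0
```

The paper's setting is harder than this sketch: the subgroup is not given but must be searched for over an exponentially large rich-subgroup class, which is what the game-theoretic and heuristic approaches address.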