In this paper we revisit the bias-variance decomposition of model error from the perspective of designing a fair classifier: we are motivated by the widely held socio-technical belief that noise variance in large datasets in social domains tracks demographic characteristics such as gender, race, and disability. We propose a conditional-iid (ciid) model built from group-specific classifiers that seeks to improve on the trade-offs made by a single model (the iid setting). We theoretically analyze the bias-variance decomposition of the different models under a Gaussian Mixture Model, and then empirically test our setup on the COMPAS and folktables datasets. We instantiate the ciid model with two procedures that improve "fairness" by conditioning out undesirable effects: first, by conditioning directly on sensitive attributes, and second, by clustering samples into groups and conditioning on cluster membership (blind to protected group membership). Our analysis suggests that there may be principled procedures and concrete real-world use cases under which conditional models are preferred, and our empirical results strongly indicate that non-iid settings, such as the ciid setting proposed here, may be more suitable for big data applications in social contexts.
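The following is a minimal sketch, not the authors' implementation, of the two ways a ciid predictor described above could be instantiated: fitting one base classifier per group, where the group label is either a sensitive attribute or a group-blind cluster assignment. All class and variable names (CIIDClassifier, X_train, a_train, etc.) are hypothetical, and the base learner and clustering method are illustrative choices.

# Minimal sketch (assumed, not the paper's code): a "ciid" predictor that fits
# one classifier per group, in contrast to a single pooled ("iid") classifier.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression


class CIIDClassifier:
    """Fit a separate base classifier on each group of the training data."""

    def __init__(self, base=LogisticRegression):
        self.base = base
        self.models = {}

    def fit(self, X, y, groups):
        # One model per observed group label.
        for g in np.unique(groups):
            idx = groups == g
            self.models[g] = self.base().fit(X[idx], y[idx])
        return self

    def predict(self, X, groups):
        # Route each sample to the model of its group.
        out = np.empty(len(X), dtype=int)
        for g, model in self.models.items():
            idx = groups == g
            if idx.any():
                out[idx] = model.predict(X[idx])
        return out


# Variant 1: condition directly on a sensitive attribute (hypothetical a_train).
# ciid_a = CIIDClassifier().fit(X_train, y_train, groups=a_train)

# Variant 2: condition on cluster membership, blind to the sensitive attribute.
# kmeans = KMeans(n_clusters=4, random_state=0).fit(X_train)
# ciid_c = CIIDClassifier().fit(X_train, y_train, groups=kmeans.labels_)
# preds = ciid_c.predict(X_test, groups=kmeans.predict(X_test))

In both variants the pooled iid baseline corresponds to fitting the same base learner once on all training data, which is the trade-off the abstract contrasts against.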