As they have a vital effect on social decision-making, AI algorithms should not only be accurate but also should not be unfair to certain sensitive groups (e.g., non-white people, women). Various AI algorithms have been specially designed to ensure that trained AI models are fair across sensitive groups. In this paper, we raise a new issue: AI models that are fair between groups can still treat individuals within the same sensitive group unfairly. We introduce a new notion of fairness, called within-group fairness, which requires that AI models be fair for individuals within the same sensitive group as well as for those in different sensitive groups. We materialize the concept of within-group fairness by proposing corresponding mathematical definitions and developing learning algorithms that control within-group fairness and between-group fairness simultaneously. Numerical studies show that the proposed learning algorithms improve within-group fairness without sacrificing accuracy or between-group fairness.