In many real-life situations, including job and loan applications, gatekeepers must make justified and fair real-time decisions about a person's fitness for a particular opportunity. In this paper, we aim to achieve approximate group fairness in an online stochastic decision-making process, where the fairness metric we consider is equalized odds. Our work follows the classical learning-from-experts scheme, assuming a finite set of classifiers (human experts, rules, options, etc.) that cannot be modified. We run separate instances of the algorithm for each label class as well as each sensitive group, where the probability of choosing each instance is optimized for both fairness and regret. Our theoretical results show that approximately equalized odds can be achieved without sacrificing much regret. We also demonstrate the performance of the algorithm on real datasets commonly used by the fairness community.
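The per-(group, label) construction above can be illustrated with a minimal sketch of the classical exponential-weights (Hedge) experts scheme: one learner instance per sensitive-group/label pair, each maintaining weights over the same fixed, unmodifiable set of classifiers. The classifiers, groups, and update schedule here are hypothetical illustrations; the paper's actual algorithm additionally couples the instances' selection probabilities to optimize the fairness-regret trade-off, which this sketch does not reproduce.

```python
import math
import random

class Hedge:
    """Classic multiplicative-weights learner over a fixed set of experts."""

    def __init__(self, n_experts, eta=0.5):
        self.weights = [1.0] * n_experts
        self.eta = eta

    def probs(self):
        total = sum(self.weights)
        return [w / total for w in self.weights]

    def pick(self, rng):
        # Sample an expert index in proportion to current weights.
        return rng.choices(range(len(self.weights)), weights=self.weights)[0]

    def update(self, losses):
        # Exponential-weights update: down-weight experts with high loss.
        self.weights = [w * math.exp(-self.eta * l)
                        for w, l in zip(self.weights, losses)]

# Hypothetical fixed classifiers (cannot be modified, per the abstract):
# always-accept, always-reject, and a simple threshold rule on a feature x.
experts = [lambda x: 1, lambda x: 0, lambda x: int(x > 0.5)]

# One Hedge instance per (sensitive group, label class) pair.
instances = {(g, y): Hedge(len(experts))
             for g in ("A", "B") for y in (0, 1)}

rng = random.Random(0)
for t in range(1000):
    g = rng.choice(("A", "B"))          # arriving individual's group
    x, y = rng.random(), rng.randint(0, 1)
    learner = instances[(g, y)]          # in practice y is revealed after the decision
    chosen = learner.pick(rng)
    losses = [abs(h(x) - y) for h in experts]  # 0/1 loss of each expert
    learner.update(losses)
```

Because each instance only sees rounds with its own label class, equalized odds (equal true-positive and false-positive rates across groups) can be targeted by constraining the per-instance selection probabilities, which is where the paper's optimization departs from the plain Hedge update shown here.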