We study critical systems that allocate scarce resources to satisfy basic needs, such as homeless services that provide housing. These systems often support communities disproportionately affected by systemic racial, gender, or other injustices, so it is crucial to design these systems with fairness considerations in mind. To address this problem, we propose a framework for evaluating fairness in contextual resource allocation systems that is inspired by fairness metrics in machine learning. This framework can be applied to evaluate the fairness properties of a historical policy, as well as to impose constraints in the design of new (counterfactual) allocation policies. Our work culminates in a set of incompatibility results that characterize the interplay between the fairness metrics we propose. Notably, we demonstrate that: 1) fairness in allocation and fairness in outcomes are usually incompatible; 2) policies that prioritize based on a vulnerability score will usually result in unequal outcomes across groups, even if the score is perfectly calibrated; 3) policies using contextual information beyond what is needed to characterize baseline risk and treatment effects can be fairer in their outcomes than those using just baseline risk and treatment effects; and 4) policies using group status in addition to baseline risk and treatment effects are as fair as possible given all available information. Our framework can help guide the discussion among stakeholders in deciding which fairness metrics to impose when allocating scarce resources.
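As a minimal illustration of claim (2), the sketch below simulates prioritization by a perfectly calibrated vulnerability score. All specifics here are our own hypothetical choices, not taken from the paper: the Beta-distributed group risks, the 30% capacity, and the assumption that treatment halves the probability of an adverse outcome. The point is only that even with a score that is calibrated by construction, post-allocation outcome rates can differ across groups.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: two groups with different baseline-risk distributions.
n = 100_000
group = rng.integers(0, 2, size=n)                  # 0 = group A, 1 = group B
risk = np.where(group == 0,
                rng.beta(2, 5, size=n),             # group A: lower baseline risk
                rng.beta(5, 2, size=n))             # group B: higher baseline risk

# A perfectly calibrated vulnerability score: by construction, among people
# with score s, a fraction s experiences the adverse outcome if untreated.
score = risk

# Scarce resource: capacity for 30% of the population; prioritize by score.
capacity = int(0.3 * n)
treated = np.zeros(n, dtype=bool)
treated[np.argsort(-score)[:capacity]] = True

# Assumed treatment effect: halves the probability of the adverse outcome.
p_bad = np.where(treated, 0.5 * risk, risk)
outcome = rng.random(n) < p_bad

for g, name in [(0, "A"), (1, "B")]:
    mask = group == g
    print(f"group {name}: treated rate = {treated[mask].mean():.3f}, "
          f"adverse outcome rate = {outcome[mask].mean():.3f}")
```

In this simulation, group B receives most of the resource (it dominates the top of the score ranking) yet still ends with a markedly higher adverse-outcome rate than group A, so allocating by vulnerability does not equalize outcomes.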