We study the linear contextual bandit problem in which an agent must select one candidate from a pool, where each candidate belongs to a sensitive group. In this setting, candidates' rewards may not be directly comparable between groups, for example when the agent is an employer hiring candidates from different ethnic groups and some groups have a lower reward due to discriminatory bias and/or social injustice. We propose a notion of fairness that states that the agent's policy is fair when it selects a candidate with the highest relative rank, which measures how good the reward is compared to candidates from the same group. This is a very strong notion of fairness, since the relative rank is not directly observed by the agent and depends on the underlying reward model and on the distribution of rewards. We therefore study the problem of learning a policy that approximates a fair policy, under the conditions that the contexts are independent between groups and the distribution of rewards of each group is absolutely continuous. In particular, we design a greedy policy which, at each round, constructs a ridge regression estimator from the observed context-reward pairs and then estimates the relative rank of each candidate using the empirical cumulative distribution function. We prove that the greedy policy achieves, after $T$ rounds, up to log factors and with high probability, a fair pseudo-regret of order $\sqrt{dT}$, where $d$ is the dimension of the context vectors. The policy also satisfies demographic parity at each round when averaged over all possible information available before the selection. We finally show with a proof-of-concept simulation that our policy achieves sub-linear fair pseudo-regret in practice as well.
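One round of the greedy policy (ridge regression estimate, then selection by empirical relative rank) can be sketched as below. This is an illustrative sketch, not the paper's exact algorithm: all names (`greedy_fair_select`, `lam`, the shapes of the inputs) are assumptions, and the cold-start value of 0.5 for groups with no history is an arbitrary choice made here.

```python
import numpy as np

def greedy_fair_select(contexts, groups, past_contexts, past_rewards,
                       past_groups, lam=1.0):
    """Select the candidate with the highest estimated relative rank.

    contexts      : (n, d) context vectors of this round's candidates
    groups        : (n,)   group label of each candidate
    past_contexts : (m, d) previously observed contexts
    past_rewards  : (m,)   previously observed rewards
    past_groups   : (m,)   group labels of past observations
    lam           : ridge regularization parameter (assumption)
    """
    d = contexts.shape[1]

    # Ridge regression estimate of the reward parameter from all
    # observed context-reward pairs: theta_hat = (lam*I + X^T X)^{-1} X^T r.
    A = lam * np.eye(d) + past_contexts.T @ past_contexts
    b = past_contexts.T @ past_rewards
    theta_hat = np.linalg.solve(A, b)

    # Estimated rewards for this round's candidates.
    est = contexts @ theta_hat

    # Relative rank of each candidate: empirical CDF of its group's
    # (estimated) rewards, evaluated at the candidate's estimated reward.
    ranks = np.empty(len(contexts))
    for i, g in enumerate(groups):
        past_est = past_contexts[past_groups == g] @ theta_hat
        if past_est.size == 0:
            ranks[i] = 0.5  # no history for this group (arbitrary default)
        else:
            ranks[i] = np.mean(past_est <= est[i])

    return int(np.argmax(ranks))
```

Note that the candidate with the highest *relative* rank need not have the highest estimated reward: a candidate near the top of a low-reward group can outrank one near the middle of a high-reward group, which is exactly what makes the selection comparable across groups.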