We study an online learning problem subject to the constraint of individual fairness, which requires that similar individuals be treated similarly. Unlike prior work on individual fairness, we do not assume that the similarity measure among individuals is known, nor do we assume that such a measure takes a particular parametric form. Instead, we leverage the existence of an auditor who detects fairness violations without enunciating the quantitative measure. In each round, the auditor examines the learner's decisions and attempts to identify a pair of individuals that are treated unfairly by the learner. We provide a general reduction framework that reduces online classification in our model to standard online classification, which allows us to leverage existing online learning algorithms to achieve sublinear regret and a sublinear number of fairness violations. Surprisingly, in the stochastic setting where the data are drawn independently from a distribution, we are also able to establish PAC-style fairness and accuracy generalization guarantees (Yona and Rothblum [2018]), despite only having access to a very restricted form of fairness feedback. Our fairness generalization bound qualitatively matches the uniform convergence bound of Yona and Rothblum [2018], while also providing a meaningful accuracy generalization guarantee. Our results resolve an open question of Gillen et al. [2018] by showing that online learning under an unknown individual fairness constraint is possible even without assuming a strong parametric form of the underlying similarity measure.
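The interaction protocol described above can be sketched as a simple simulation. The metric, the linear learner, the batch size, and the slack parameter `alpha` below are all illustrative assumptions; in the paper's model the auditor never reveals the similarity measure to the learner, it only reports a violated pair (if any) each round.

```python
import numpy as np

rng = np.random.default_rng(0)


def metric(x, y):
    # Hypothetical similarity measure, hidden from the learner:
    # here, individuals are "similar" when their features are close.
    return np.linalg.norm(x - y)


def audit(individuals, scores, alpha=0.1):
    """Auditor: report one pair (i, j) whose treatment gap exceeds
    their similarity by more than alpha, or None if the round is fair.

    The learner only sees the returned pair, never metric() itself.
    """
    n = len(individuals)
    for i in range(n):
        for j in range(i + 1, n):
            gap = abs(scores[i] - scores[j])
            if gap > metric(individuals[i], individuals[j]) + alpha:
                return (i, j)
    return None


# One simulated round: a linear learner scores a batch of individuals,
# and the auditor inspects the decisions for a fairness violation.
w = np.array([2.0, -1.0])            # learner's current hypothesis
batch = rng.normal(size=(5, 2))      # this round's individuals
scores = batch @ w
violation = audit(batch, scores)     # a pair (i, j), or None
```

A reduction in this model would feed the reported pair back into a standard online learning algorithm as additional loss information, which is how sublinear regret and a sublinear number of violations are obtained.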