Machine learning systems are often deployed to make critical decisions such as credit lending and hiring. While making these decisions, such systems often encode the user's demographic information (e.g., gender, age) in their intermediate representations, which can lead to decisions biased towards specific demographics. Prior work has focused on debiasing intermediate representations to ensure fair decisions. However, these approaches fail to remain fair when the task or the demographic distribution changes. To ensure fairness in the wild, it is important for a system to adapt to such changes as it accesses new data incrementally. In this work, we address this issue by introducing the problem of learning fair representations in an incremental learning setting. To this end, we present Fairness-aware Incremental Representation Learning (FaIRL), a representation learning system that sustains fairness while incrementally learning new tasks. FaIRL achieves fairness and learns new tasks by controlling the rate-distortion function of the learned representations. Our empirical evaluations show that FaIRL makes fair decisions while achieving high performance on the target task, outperforming several baselines.
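The abstract attributes FaIRL's behavior to controlling the rate-distortion function of the learned representations but does not spell out the measure. As a minimal sketch, the snippet below computes the log-det coding-rate estimate of rate distortion that is commonly used in representation learning (R(Z, ε) = ½ log det(I + d/(nε²) ZᵀZ)); the function name `coding_rate`, the parameter `eps`, and the assumption that FaIRL's rate-distortion measure takes this form are illustrative, not confirmed by the abstract.

```python
import numpy as np

def coding_rate(Z: np.ndarray, eps: float = 0.5) -> float:
    """Coding-rate estimate of the rate distortion of representations.

    Z   : (n, d) matrix of n representation vectors of dimension d.
    eps : allowed distortion; smaller eps means a finer encoding.

    Returns an estimate of the rate needed to encode the rows of Z up
    to distortion eps (log-det form; an assumption for illustration).
    """
    n, d = Z.shape
    # 0.5 * log det(I + d / (n * eps^2) * Z^T Z), computed via slogdet
    # for numerical stability.
    gram = np.eye(d) + (d / (n * eps**2)) * (Z.T @ Z)
    _, logdet = np.linalg.slogdet(gram)
    return 0.5 * logdet

# Toy usage: compact representations need fewer bits than diverse ones.
rng = np.random.default_rng(0)
Z_compact = 0.01 * rng.standard_normal((128, 16))
Z_diverse = rng.standard_normal((128, 16))
print(coding_rate(Z_compact), "<", coding_rate(Z_diverse))
```

Intuitively, maximizing such a quantity for task-relevant structure while minimizing it for demographic attributes is one concrete way to "control" rate distortion, which is the kind of trade-off the abstract alludes to.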