Pairwise learning strategies are prevalent for optimizing recommendation models on implicit feedback data; they typically learn user preferences by discriminating between positive items (i.e., those clicked by a user) and negative items (i.e., those obtained by negative sampling). However, the sizes of different item groups (specified by an item attribute) are usually unevenly distributed. We empirically find that the uniform negative sampling strategy commonly used in pairwise algorithms (e.g., BPR) can inherit this data bias and oversample the majority item group as negative instances, severely undermining group fairness on the item side. In this paper, we propose a Fairly adaptive Negative sampling approach (FairNeg), which improves item group fairness by adaptively adjusting the group-level negative sampling distribution during training. In particular, FairNeg first perceives the model's unfairness status at each step and then adjusts the group-wise sampling distribution with an adaptive momentum update strategy to better facilitate fairness optimization. Moreover, we propose a negative sampling distribution Mixup mechanism that gracefully incorporates existing importance-aware sampling techniques intended for mining informative negative samples, thus allowing multiple optimization objectives to be pursued simultaneously. Extensive experiments on four public datasets demonstrate our proposed method's superiority in group fairness enhancement and fairness-utility tradeoff.
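To make the two mechanisms concrete, the following is a minimal sketch of a group-level negative sampler with a momentum-based fairness update and a distribution Mixup. All names, the unfairness signal, and the hyperparameters (`momentum`, `mixup_beta`) are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

# Hypothetical sketch of a FairNeg-style sampler: a group-level negative
# sampling distribution is adapted with momentum from a per-group unfairness
# signal, then blended (Mixup) with an importance-aware distribution.

class FairNegSampler:
    def __init__(self, num_groups, momentum=0.9, mixup_beta=0.5):
        # Start from a uniform group-level sampling distribution.
        self.p_group = np.full(num_groups, 1.0 / num_groups)
        self.velocity = np.zeros(num_groups)
        self.momentum = momentum      # momentum coefficient for the update
        self.mixup_beta = mixup_beta  # weight of the fairness-aware distribution

    def update(self, group_unfairness):
        # group_unfairness: per-group signal of current disadvantage
        # (e.g., gap between a group's utility and the average utility).
        grad = group_unfairness - group_unfairness.mean()
        self.velocity = self.momentum * self.velocity + (1 - self.momentum) * grad
        # Shift probability mass away from groups that are currently
        # disadvantaged (over-sampled as negatives), then re-normalize.
        self.p_group = np.clip(self.p_group - self.velocity, 1e-8, None)
        self.p_group /= self.p_group.sum()

    def mixup(self, p_importance):
        # Blend the fairness-aware distribution with an importance-aware one
        # (e.g., from hard-negative mining) to serve both objectives.
        return self.mixup_beta * self.p_group + (1 - self.mixup_beta) * p_importance

    def sample_groups(self, p_importance, size=1):
        p = self.mixup(p_importance)
        return np.random.choice(len(p), size=size, p=p)
```

In use, `update` would be called once per training step with the measured group-level unfairness, and negatives would then be drawn from the groups returned by `sample_groups`; within a group, any item sampler can be used.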