Graph neural network (GNN) based methods have saturated the field of recommender systems. The gains of these systems have been significant, demonstrating the advantages of interpreting data through a network structure. However, despite the noticeable benefits of using graph structures in recommendation tasks, this representational form has also introduced new challenges that exacerbate the complexity of mitigating algorithmic bias. When GNNs are integrated into downstream tasks, such as recommendation, bias mitigation can become even more difficult. Furthermore, the intractability of applying existing fairness-promotion methods to large, real-world datasets places even more serious constraints on mitigation attempts. Our work sets out to fill this gap by taking an existing method for promoting individual fairness on graphs and extending it to support mini-batch, or sub-sample based, training of a GNN, thus laying the groundwork for applying this method to a downstream recommendation task. We evaluate two popular GNN methods: Graph Convolutional Network (GCN), which trains on the entire graph, and GraphSAGE, which uses probabilistic random walks to create subgraphs for mini-batch training, and we assess the effects of sub-sampling on individual fairness. We implement an individual fairness notion called \textit{REDRESS}, proposed by Dong et al., which uses rank optimization to learn individually fair node, or item, embeddings. We empirically show on two real-world datasets that GraphSAGE achieves not only comparable accuracy but also improved fairness compared with the GCN model. These findings have consequential ramifications for individual fairness promotion, GNNs, and, in downstream form, recommender systems, showing that mini-batch training facilitates individual fairness promotion by allowing local nuance to guide the process of fairness promotion in representation learning.
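To make the sub-sampling contrast concrete, the following is a minimal illustrative sketch (not the paper's implementation) of the kind of layer-wise neighbor sampling GraphSAGE-style mini-batch training relies on: each batch expands a set of seed nodes into a small sampled subgraph instead of touching the full graph as GCN does. All function names and the fixed-fanout sampling scheme are our own assumptions for illustration.

```python
import random

def sample_neighbors(adj, node, k):
    # Illustrative helper: uniformly sample up to k neighbors of one node.
    nbrs = adj.get(node, [])
    if len(nbrs) <= k:
        return list(nbrs)
    return random.sample(nbrs, k)

def build_minibatch(adj, seed_nodes, fanouts):
    # Expand seed nodes hop by hop into a sampled subgraph; fanouts[i]
    # bounds how many neighbors are drawn at hop i. Returns every node
    # the mini-batch computation would touch.
    frontier = set(seed_nodes)
    batch_nodes = set(seed_nodes)
    for k in fanouts:
        next_frontier = set()
        for v in frontier:
            next_frontier.update(sample_neighbors(adj, v, k))
        batch_nodes |= next_frontier
        frontier = next_frontier
    return batch_nodes
```

Because each batch only sees a sampled local neighborhood, any fairness signal computed within the batch is driven by local structure, which is the mechanism the abstract credits for letting "local nuance" guide fairness promotion.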