Graph Neural Networks (GNNs) have shown great power in learning node representations on graphs. However, they may inherit historical prejudices from training data, leading to discriminatory bias in predictions. Although some work has developed fair GNNs, most of these methods directly borrow fair representation learning techniques from non-graph domains without considering the potential problem of sensitive attribute leakage caused by feature propagation in GNNs. Indeed, we empirically observe that feature propagation can vary the correlation of previously innocuous non-sensitive features with the sensitive ones. This can be viewed as a leakage of sensitive information that could further exacerbate discrimination in predictions. We therefore design two feature masking strategies based on feature correlations to highlight the importance of considering feature propagation and correlation variation in alleviating discrimination. Motivated by this analysis, we propose the Fair View Graph Neural Network (FairVGNN), which generates fair views of features by automatically identifying and masking sensitive-correlated features while accounting for correlation variation after feature propagation. Given the learned fair views, we adaptively clamp the weights of the encoder to avoid using sensitive-related features. Experiments on real-world datasets demonstrate that FairVGNN achieves a better trade-off between model utility and fairness. Our code is publicly available at https://github.com/YuWVandy/FairVGNN.
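To make the mechanism concrete, below is a minimal PyTorch sketch of the two ideas the abstract describes: measuring how feature propagation changes each channel's correlation with the sensitive attribute, masking the most sensitive-correlated channels, and clamping the encoder's weights on those channels. This is an illustrative assumption-laden sketch, not the authors' implementation (FairVGNN learns its fair views adversarially; see the linked repository). All names here (propagate, sensitive_correlation, correlation_mask, clamp_encoder_weights) and hyperparameters (hops, top_k, eps) are hypothetical.

```python
import torch

def propagate(x, adj, hops=2):
    # Symmetrically normalized propagation: X' = (D^{-1/2} A D^{-1/2})^k X
    deg = adj.sum(dim=1).clamp(min=1.0)
    d_inv_sqrt = deg.pow(-0.5)
    norm_adj = d_inv_sqrt.unsqueeze(1) * adj * d_inv_sqrt.unsqueeze(0)
    for _ in range(hops):
        x = norm_adj @ x
    return x

def sensitive_correlation(x, s):
    # |Pearson correlation| of every feature channel with sensitive attribute s
    xc = x - x.mean(dim=0, keepdim=True)
    sc = s - s.mean()
    cov = (xc * sc.unsqueeze(1)).mean(dim=0)
    std = xc.std(dim=0, unbiased=False) * sc.std(unbiased=False) + 1e-8
    return (cov / std).abs()

def correlation_mask(x, adj, s, top_k=1):
    # Zero out the channels most correlated with s *after* propagation, since
    # propagation can turn previously innocuous channels into sensitive proxies.
    corr = sensitive_correlation(propagate(x, adj), s)
    masked = corr.topk(top_k).indices
    mask = torch.ones(x.size(1))
    mask[masked] = 0.0
    return x * mask, masked

def clamp_encoder_weights(weight, masked_channels, eps=1e-2):
    # Shrink first-layer encoder weights on sensitive-correlated input channels,
    # so the encoder cannot rely on them even if the mask is imperfect.
    with torch.no_grad():
        weight[:, masked_channels] = weight[:, masked_channels].clamp(-eps, eps)

# Toy usage: 4 nodes on a ring, 3 feature channels, binary sensitive attribute.
adj = torch.tensor([[0., 1., 0., 1.],
                    [1., 0., 1., 0.],
                    [0., 1., 0., 1.],
                    [1., 0., 1., 0.]])
x = torch.randn(4, 3)
s = torch.tensor([0., 1., 0., 1.])
x_fair, masked = correlation_mask(x, adj, s, top_k=1)
encoder = torch.nn.Linear(3, 8)
clamp_encoder_weights(encoder.weight, masked)
```

Note the design choice the sketch highlights: correlations are computed on the propagated features rather than the raw ones, which is precisely where the abstract's "correlation variation after feature propagation" matters.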