Graph representation learning models have been deployed for making decisions in multiple high-stakes scenarios. It is therefore critical to ensure that these models are fair. Prior research has shown that graph neural networks can inherit and reinforce the bias present in graph data. Researchers have begun to examine ways to mitigate the bias in such models. However, existing efforts are restricted by their inefficiency, limited applicability, and the constraints they place on sensitive attributes. To address these issues, we present FairMILE, a general framework for fair and scalable graph representation learning. FairMILE is a multi-level framework that allows contemporary unsupervised graph embedding methods to scale to large graphs in a method-agnostic manner. FairMILE learns fair and high-quality node embeddings by incorporating fairness constraints into each phase of the framework. Our experiments across two distinct tasks demonstrate that FairMILE learns node representations that often achieve superior fairness scores and high downstream performance, while significantly outperforming all baselines in terms of efficiency.
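The multi-level idea described above can be illustrated with a minimal sketch. This is not the paper's actual algorithm: the fairness-aware matching rule (preferring to merge neighbors with different sensitive attributes), the random stand-in for the base embedding method, and the neighbor-averaging refinement are all simplifying assumptions made purely for illustration, and every function name here is hypothetical.

```python
import numpy as np

def coarsen(adj, sens):
    """One coarsening level: merge pairs of adjacent nodes into super-nodes.
    Fairness heuristic (an assumption, not the paper's exact rule): prefer
    merging neighbors with *different* sensitive attributes so super-nodes
    stay demographically mixed."""
    n = len(adj)
    matched = [False] * n
    mapping = {}  # fine node id -> coarse node id
    cid = 0
    for u in range(n):
        if matched[u]:
            continue
        cands = [v for v in adj[u] if not matched[v]]
        diff = [v for v in cands if sens[v] != sens[u]]
        v = (diff or cands or [None])[0]
        matched[u] = True
        mapping[u] = cid
        if v is not None:
            matched[v] = True
            mapping[v] = cid
        cid += 1
    # build the coarse graph's adjacency from the fine edges
    cadj = [set() for _ in range(cid)]
    for u in range(n):
        for v in adj[u]:
            cu, cv = mapping[u], mapping[v]
            if cu != cv:
                cadj[cu].add(cv)
    return [sorted(s) for s in cadj], mapping

def base_embed(adj, dim, rng):
    # Stand-in for any unsupervised embedding method; the framework is
    # method-agnostic, so here we just draw random Gaussian vectors.
    return rng.standard_normal((len(adj), dim))

def refine(coarse_emb, mapping, adj, steps=2):
    """Project coarse embeddings back to fine nodes, then smooth by
    averaging over neighbors (a simple propagation-style refinement)."""
    n = len(adj)
    emb = np.stack([coarse_emb[mapping[u]] for u in range(n)])
    for _ in range(steps):
        new = emb.copy()
        for u in range(n):
            if adj[u]:
                new[u] = 0.5 * emb[u] + 0.5 * emb[adj[u]].mean(axis=0)
        emb = new
    return emb

rng = np.random.default_rng(0)
# toy graph: 6 nodes on a cycle, alternating sensitive attribute
adj = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
sens = [i % 2 for i in range(6)]

cadj, mapping = coarsen(adj, sens)     # coarsening phase
cemb = base_embed(cadj, dim=4, rng=rng)  # base embedding on the small graph
emb = refine(cemb, mapping, adj)       # refinement back to the full graph
print(emb.shape)  # one embedding per original node
```

The point of the pipeline is that the (potentially expensive) base embedding runs only on the much smaller coarsened graph, while the coarsening and refinement phases are where fairness considerations can be injected.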