Graph Neural Networks (GNNs) have proven to excel in predictive modeling tasks where the underlying data is a graph. However, as GNNs are extensively used in human-centered applications, concerns about fairness have arisen. While edge deletion is a common method used to promote fairness in GNNs, it fails to account for cases where the data inherently lacks fair connections. In this work, we consider the previously unexplored method of edge addition, combined with deletion, to promote fairness. We propose two model-agnostic algorithms to perform edge editing: a brute-force approach and a continuous approximation approach, FairEdit. FairEdit performs efficient edge editing by leveraging gradient information of a fairness loss to find edges that improve fairness. We find that FairEdit outperforms standard training on many datasets and GNN methods while performing comparably to several state-of-the-art methods, demonstrating FairEdit's ability to improve fairness across a range of domains and models.
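To make the gradient-guided idea concrete, the following is a minimal sketch of one edge-editing step, not the authors' released implementation. It assumes a dense adjacency matrix, a GNN that accepts `(features, adj)`, and a binary sensitive attribute; the function names `fairedit_step` and `demographic_parity_loss` are hypothetical and chosen for illustration. The sketch treats the adjacency matrix as continuous, backpropagates a differentiable fairness loss through the model, and flips the candidate edges (additions or deletions) whose gradients indicate the largest expected reduction in the fairness loss.

```python
import torch


def demographic_parity_loss(logits, sens):
    # Differentiable surrogate for demographic parity (assumed choice of
    # fairness loss): squared gap between the mean positive-class
    # probabilities of the two sensitive groups (sens assumed binary 0/1).
    p = torch.sigmoid(logits).squeeze(-1)
    return (p[sens == 1].mean() - p[sens == 0].mean()) ** 2


def fairedit_step(model, adj, feats, sens, n_edits=1):
    # One gradient-guided edge-editing step (sketch, assumed interface).
    adj = adj.clone().requires_grad_(True)
    logits = model(feats, adj)                  # model assumed to take a dense adjacency
    loss = demographic_parity_loss(logits, sens)
    grad = torch.autograd.grad(loss, adj)[0]

    # For an absent edge (adj == 0), a negative gradient means adding it would
    # lower the fairness loss; for a present edge (adj == 1), a positive
    # gradient means deleting it would lower the loss. Score both edit types.
    neg_inf = torch.full_like(grad, float("-inf"))
    add_score = torch.where(adj.detach() == 0, -grad, neg_inf)
    del_score = torch.where(adj.detach() == 1, grad, neg_inf)
    scores = torch.maximum(add_score, del_score)

    # Apply the n_edits highest-scoring edits by flipping the corresponding
    # entries (symmetrization for undirected graphs is omitted for brevity).
    edited = adj.detach().clone()
    for idx in scores.flatten().topk(n_edits).indices:
        i, j = divmod(idx.item(), adj.size(1))
        edited[i, j] = 1.0 - edited[i, j]
    return edited
```

In practice such a step would be interleaved with model training, re-evaluating the fairness gradient after each batch of edits; the brute-force alternative mentioned above would instead evaluate every candidate edge flip explicitly, which is far more expensive.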