Conventional graph neural networks (GNNs) are often confronted with fairness issues that may stem from their input, including node attributes and the neighbors surrounding a node. While several recent approaches have been proposed to eliminate the bias rooted in sensitive attributes, they ignore the other key input of GNNs, namely the neighbors of a node, which can introduce bias since GNNs hinge on neighborhood structures to generate node representations. In particular, the varying neighborhood structures across nodes, manifesting themselves in drastically different node degrees, give rise to diverse node behaviors and biased outcomes. In this paper, we first formulate the degree bias using a generalized definition of node degree, which captures and quantifies the different multi-hop structures around different nodes. To address this bias in the context of node classification, we propose a novel GNN framework called Generalized Degree Fairness-centric Graph Neural Network (Deg-FairGNN). Specifically, in each GNN layer, we employ a learnable debiasing function to generate debiasing contexts, which modulate the layer-wise neighborhood aggregation to eliminate the degree bias originating from the diverse degrees among nodes. Extensive experiments on three benchmark datasets demonstrate the effectiveness of our model on both accuracy and fairness metrics.
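To make the idea of a debiasing context modulating neighborhood aggregation concrete, the following is a minimal pure-Python sketch. It is a hypothetical illustration, not the paper's actual formulation: the function name, the `gamma` strength parameter, and the specific degree-based context formula are all assumptions introduced here for exposition. The sketch mean-aggregates neighbor features, then scales each node's aggregated message by a context derived from how the node's degree deviates from the average degree, so high- and low-degree nodes are pushed toward more uniform behavior.

```python
def degree_debiased_aggregate(features, adj, gamma=0.5):
    """Mean-aggregate neighbor features, modulated by a degree-based context.

    features: dict node -> list[float] (node feature vectors)
    adj: dict node -> list of neighbor nodes
    gamma: strength of the (hypothetical) debiasing modulation

    This is an illustrative stand-in for a learnable debiasing function;
    in a real GNN layer the context would be produced by trained parameters.
    """
    degrees = {v: len(adj[v]) for v in adj}
    avg_deg = sum(degrees.values()) / len(degrees)
    out = {}
    for v, nbrs in adj.items():
        dim = len(features[v])
        # plain mean aggregation over the node's neighbors
        agg = [sum(features[u][i] for u in nbrs) / max(len(nbrs), 1)
               for i in range(dim)]
        # debiasing context: shrink messages for high-degree nodes,
        # boost them for low-degree nodes (assumed form, for illustration)
        context = 1.0 + gamma * (avg_deg - degrees[v]) / (avg_deg + 1.0)
        out[v] = [context * x for x in agg]
    return out


# Tiny star graph: node 0 has degree 2, nodes 1 and 2 have degree 1.
adj = {0: [1, 2], 1: [0], 2: [0]}
feats = {0: [1.0, 0.0], 1: [0.0, 1.0], 2: [2.0, 2.0]}
print(degree_debiased_aggregate(feats, adj))
```

On this toy graph, the hub node 0 receives a context below 1 (its message is damped) while the leaf nodes receive a context above 1, illustrating how degree-dependent modulation can counteract the structural imbalance the abstract describes.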