Knowledge graph data are prevalent in real-world applications, and knowledge graph neural networks (KGNNs) are essential techniques for knowledge graph representation learning. Although KGNNs effectively model the structural information in knowledge graphs, these frameworks amplify the underlying data bias, leading to discrimination against certain groups or individuals in downstream applications. Moreover, existing debiasing approaches mainly focus on entity-wise bias, so eliminating the multi-hop relational bias that pervasively exists in knowledge graphs remains an open question. Eliminating relational bias is challenging because the paths that generate the bias are sparse and the proximity structure of knowledge graphs is non-linear. To tackle these challenges, we propose Fair-KGNN, a KGNN framework that simultaneously alleviates multi-hop relational bias and preserves entity-to-relation proximity information in knowledge graphs. The proposed framework generalizes to mitigate relational bias for all types of KGNNs. We develop two instances of Fair-KGNN by incorporating two state-of-the-art KGNN models, RGCN and CompGCN, to mitigate gender-occupation and nationality-salary bias. Experiments on three benchmark knowledge graph datasets demonstrate that Fair-KGNN effectively mitigates unfairness during representation learning while preserving the predictive performance of the KGNN models.
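To make the abstract's idea concrete, the following is a minimal, hypothetical sketch, not the authors' implementation: a toy relational message-passing layer (in the spirit of RGCN) trained with a task loss plus an added debiasing term that discourages the learned entity embeddings from separating sensitive groups (e.g., gender), which is the kind of signal multi-hop relational bias such as gender-occupation exploits. All names, the toy graph, the stand-in task loss, and the trade-off weight lambda_fair are assumptions for illustration only.

```python
# Hedged sketch (assumed, not Fair-KGNN itself): relational GNN layer + a simple
# fairness regularizer combined into one training objective.
import torch
import torch.nn as nn
import torch.nn.functional as F


class RelationalLayer(nn.Module):
    """Minimal RGCN-style layer: one transformation per relation type."""
    def __init__(self, in_dim, out_dim, num_relations):
        super().__init__()
        self.rel_weights = nn.Parameter(torch.randn(num_relations, in_dim, out_dim) * 0.1)
        self.self_weight = nn.Linear(in_dim, out_dim)

    def forward(self, x, edge_index, edge_type):
        # edge_index: (2, E) head/tail entity ids; edge_type: (E,) relation ids
        src, dst = edge_index
        messages = torch.einsum('ei,eio->eo', x[src], self.rel_weights[edge_type])
        out = self.self_weight(x)
        out = out.index_add(0, dst, messages)  # aggregate incoming relational messages
        return F.relu(out)


def fairness_penalty(entity_emb, sensitive_ids, group_labels):
    """Toy debiasing term: pull the mean embeddings of sensitive groups together
    so downstream multi-hop predictions cannot exploit that direction."""
    groups = [entity_emb[sensitive_ids[group_labels == g]].mean(dim=0)
              for g in group_labels.unique()]
    return sum(F.mse_loss(a, b) for i, a in enumerate(groups) for b in groups[i + 1:])


if __name__ == "__main__":
    num_entities, num_relations, dim = 6, 2, 8
    x = nn.Parameter(torch.randn(num_entities, dim))          # learnable entity embeddings
    edge_index = torch.tensor([[0, 1, 2, 3], [4, 4, 5, 5]])   # toy (head, tail) pairs
    edge_type = torch.tensor([0, 0, 1, 1])
    layer = RelationalLayer(dim, dim, num_relations)

    # hypothetical sensitive groups: entities 0,1 vs 2,3 (e.g., two gender groups)
    sensitive_ids = torch.tensor([0, 1, 2, 3])
    group_labels = torch.tensor([0, 0, 1, 1])

    opt = torch.optim.Adam([x, *layer.parameters()], lr=1e-2)
    lambda_fair = 1.0  # assumed trade-off between utility and fairness
    for _ in range(100):
        h = layer(x, edge_index, edge_type)
        # stand-in task loss: keep linked entities close (real KGNNs use link prediction)
        task_loss = F.mse_loss(h[edge_index[0]], h[edge_index[1]])
        loss = task_loss + lambda_fair * fairness_penalty(h, sensitive_ids, group_labels)
        opt.zero_grad(); loss.backward(); opt.step()
```

The key design point this sketch illustrates is the joint objective: the task term preserves proximity structure, while the added penalty suppresses the sensitive direction in the representation space; the actual Fair-KGNN mechanism described in the paper should be consulted for the real formulation.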