Graphs facilitate the modeling of various complex systems, such as gene networks and power grids, as well as the analysis of the underlying relations within them. Learning over graphs has recently attracted increasing attention, particularly graph neural network (GNN)-based solutions, among which graph attention networks (GATs) have become one of the most widely utilized neural network structures for graph-based tasks. Although it has been shown that the use of graph structure in learning amplifies algorithmic bias, the influence of the attention design in GATs on algorithmic bias has not been investigated. Motivated by this, the present study first carries out a theoretical analysis to demonstrate the sources of algorithmic bias in GAT-based learning for node classification. A novel algorithm, FairGAT, which leverages a fairness-aware attention design, is then developed based on these theoretical findings. Experimental results on real-world networks demonstrate that FairGAT improves group fairness measures while also providing utility comparable to fairness-aware baselines for node classification and link prediction.
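To make the attention design under discussion concrete, the following is a minimal sketch of a single-head GAT attention layer in NumPy. It implements the standard GAT formulation (LeakyReLU on concatenated projected features, softmax over each node's neighborhood); it is illustrative only and does not include the fairness-aware modifications proposed by FairGAT. All names (`gat_layer`, `X`, `A`, `W`, `a`) are assumptions for this sketch.

```python
import numpy as np

def gat_layer(X, A, W, a):
    """One single-head GAT attention layer (sketch).
    X: (N, F) node features; A: (N, N) adjacency with self-loops;
    W: (F, F') projection weights; a: (2F',) attention vector."""
    H = X @ W                                  # project features: (N, F')
    N = H.shape[0]
    # Raw attention logits e_ij = LeakyReLU(a^T [h_i || h_j]).
    e = np.zeros((N, N))
    for i in range(N):
        for j in range(N):
            z = np.concatenate([H[i], H[j]]) @ a
            e[i, j] = z if z > 0 else 0.2 * z  # LeakyReLU, slope 0.2
    e = np.where(A > 0, e, -np.inf)            # mask out non-neighbors
    # Softmax over each node's neighborhood (numerically stabilized).
    e = e - e.max(axis=1, keepdims=True)
    alpha = np.exp(e)
    alpha = alpha / alpha.sum(axis=1, keepdims=True)
    return alpha @ H                           # attention-weighted aggregation
```

FairGAT's theoretical analysis targets how these attention coefficients `alpha` are computed; its fairness-aware design alters that computation, which this generic sketch does not capture.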