Graph Neural Networks (GNNs) have recently demonstrated superior capability in tackling graph analytical problems across various applications. Nevertheless, with the widespread adoption of GNNs in high-stakes decision-making processes, there is growing societal concern that GNNs could make discriminatory, and potentially illegal, decisions toward certain demographic groups. Although some explorations have been made toward developing fair GNNs, existing approaches are tailored to a specific GNN model. In practical scenarios, however, myriad GNN variants have been proposed for different tasks, and it is costly to train and fine-tune existing debiasing models for each of them. Moreover, bias in a trained model often originates from the training data, yet how to mitigate bias in the graph data itself is usually overlooked. In this work, different from existing efforts, we first propose novel definitions and metrics to measure the bias in an attributed network, which lead to an optimization objective for bias mitigation. Based on this objective, we develop a framework named EDITS to mitigate bias in attributed networks while preserving useful information. EDITS works in a model-agnostic manner: it is independent of the specific GNN applied to downstream tasks. Extensive experiments on both synthetic and real-world datasets demonstrate the validity of the proposed bias metrics and the superiority of EDITS in both bias mitigation and utility maintenance. Open-source implementation: https://github.com/yushundong/EDITS.
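To make the model-agnostic, debias-then-train idea concrete, below is a minimal sketch in Python. It is not the EDITS implementation: the bias measure shown (a per-feature 1-D Wasserstein distance between the attribute distributions of two demographic groups) and the mean-shifting debiasing step are simplified stand-ins for the paper's formal metrics and learned debiasing, and the function name `measure_attribute_bias` is hypothetical.

```python
# Hypothetical sketch of the debias-then-train workflow; not the EDITS API.
import numpy as np
from scipy.stats import wasserstein_distance

def measure_attribute_bias(X: np.ndarray, sensitive: np.ndarray) -> float:
    """Average per-feature 1-D Wasserstein distance between the
    attribute distributions of the two demographic groups."""
    g0, g1 = X[sensitive == 0], X[sensitive == 1]
    return float(np.mean([wasserstein_distance(g0[:, j], g1[:, j])
                          for j in range(X.shape[1])]))

# Toy attributed network: 100 nodes, 5 features, binary sensitive attribute.
rng = np.random.default_rng(0)
sensitive = rng.integers(0, 2, size=100)
X = rng.normal(loc=sensitive[:, None] * 0.5, scale=1.0, size=(100, 5))

print("attribute bias before:", measure_attribute_bias(X, sensitive))

# Crude debiasing stand-in: shift each group's feature mean to the global
# mean. EDITS instead learns the debiased attributes and structure by
# optimizing its bias metrics while preserving useful information.
X_debiased = X.copy()
for g in (0, 1):
    mask = sensitive == g
    X_debiased[mask] += X.mean(axis=0) - X[mask].mean(axis=0)

print("attribute bias after: ", measure_attribute_bias(X_debiased, sensitive))
```

The point of the sketch is the workflow rather than the specific operations: because debiasing acts on the data, the debiased graph can be fed to any downstream GNN without modifying the model, which is what "model-agnostic" means here.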