As the representations output by Graph Neural Networks (GNNs) are increasingly employed in real-world applications, it becomes important to ensure that these representations are fair and stable. In this work, we establish a key connection between counterfactual fairness and stability and leverage it to propose a novel framework, NIFTY (uNIfying Fairness and stabiliTY), which can be used with any GNN to learn fair and stable representations. We introduce a novel objective function that simultaneously accounts for fairness and stability, and we develop a layer-wise weight normalization based on the Lipschitz constant to enhance neural message passing in GNNs. In doing so, we enforce fairness and stability both in the objective function and in the GNN architecture. Further, we show theoretically that our layer-wise weight normalization promotes counterfactual fairness and stability in the resulting representations. We introduce three new graph datasets comprising high-stakes decisions in the criminal justice and financial lending domains. Extensive experiments on these datasets demonstrate the efficacy of our framework.
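As an illustration of the general idea (not the paper's exact procedure), Lipschitz-based weight normalization can be sketched by rescaling each layer's weight matrix by its spectral norm, which bounds the Lipschitz constant of the corresponding linear map. The function name and target bound below are illustrative assumptions:

```python
import numpy as np

def lipschitz_normalize(W, target=1.0):
    """Rescale a weight matrix so that the spectral norm (the
    Lipschitz constant of the linear map x -> W @ x) is at most
    `target`. Applied layer-wise, this caps the Lipschitz constant
    of the whole network by the product of the per-layer bounds."""
    # The largest singular value equals the spectral norm.
    sigma = np.linalg.svd(W, compute_uv=False)[0]
    if sigma > target:
        W = W * (target / sigma)
    return W

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 4)) * 3.0          # likely exceeds the bound
W_norm = lipschitz_normalize(W)
top_sv = np.linalg.svd(W_norm, compute_uv=False)[0]  # at most 1.0
```

Bounding each layer's Lipschitz constant limits how much a small perturbation of a node's attributes (e.g., a counterfactual flip of a sensitive attribute, or input noise) can change the learned representation, which is the mechanism linking this normalization to both stability and counterfactual fairness.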