Graph Neural Networks (GNNs) have developed rapidly in recent years. Owing to their strong ability to model graph-structured data, GNNs are widely used in various applications, including high-stakes scenarios such as financial analysis, traffic prediction, and drug discovery. Despite their great potential to benefit humans in the real world, recent studies show that GNNs can leak private information, are vulnerable to adversarial attacks, can inherit and magnify societal bias from training data, and lack interpretability, all of which risk causing unintentional harm to users and society. For example, existing works demonstrate that attackers can fool GNNs into producing the outcomes they desire with unnoticeable perturbations of the training graph. GNNs trained on social networks may embed discrimination in their decision process, reinforcing undesirable societal bias. Consequently, trustworthy GNNs in various aspects are emerging to prevent harm from GNN models and to increase users' trust in GNNs. In this paper, we give a comprehensive survey of GNNs in the computational aspects of privacy, robustness, fairness, and explainability. For each aspect, we give a taxonomy of the related methods and formulate general frameworks for the multiple categories of trustworthy GNNs. We also discuss future research directions for each aspect and the connections between these aspects that help achieve trustworthiness.