Benefiting from the injection of human prior knowledge, graphs, as derived discrete data, are semantically dense, allowing models to learn semantic information from them efficiently. Accordingly, graph neural networks (GNNs) have achieved impressive success in various fields. Revisiting GNN learning paradigms, we find that the relationship between human expertise and the knowledge modeled by GNNs remains unclear to researchers. To this end, we conduct motivating experiments and derive an empirical observation: in general domains, GNNs gradually learn human expertise. By further observing the effects of introducing expertise logic into graph representation learning, we conclude that guiding GNNs to learn human expertise can improve model performance. To explore the intrinsic mechanism behind these observations, we elaborate a Structural Causal Model for the graph representation learning paradigm. Following this theoretical guidance, we introduce an auxiliary causal logic learning paradigm that encourages the model to learn the expertise logic causally related to the graph representation learning task. In practice, a counterfactual technique is further applied to tackle the insufficient-training issue during optimization. Extensive experiments on both crafted and real-world domains demonstrate the consistent effectiveness of the proposed method.