Anomaly detection from graph data has drawn much attention due to its practical significance in many critical applications, including cybersecurity, finance, and social networks. Existing data mining and machine learning methods are either shallow methods that cannot effectively capture the complex interdependency of graph data, or graph autoencoder methods that cannot fully exploit contextual information as supervision signals for effective anomaly detection. To overcome these challenges, in this paper we propose a novel method, Self-Supervised Learning for Graph Anomaly Detection (SL-GAD). Our method constructs different contextual subgraphs (views) based on a target node and employs two modules, generative attribute regression and multi-view contrastive learning, for anomaly detection. While the generative attribute regression module allows us to capture anomalies in the attribute space, the multi-view contrastive learning module can exploit richer structural information from multiple subgraphs, and is thus able to capture anomalies in the structure space as well as in the combination of structure and attribute information. We conduct extensive experiments on six benchmark datasets, and the results demonstrate that our method outperforms state-of-the-art methods by a large margin.