Graph anomaly detection (GAD) is a vital task in graph-based machine learning and has been widely applied in many real-world applications. The primary goal of GAD is to identify anomalous nodes in graph datasets that deviate significantly from the majority of nodes. Recent methods have explored contrastive strategies at various scales for GAD, i.e., node-subgraph and node-node contrasts. However, they neglect subgraph-subgraph comparison information, in which normal and abnormal subgraph pairs behave differently in terms of embeddings and structures, resulting in sub-optimal task performance. In this paper, we realize this idea for the first time in a proposed multi-view multi-scale contrastive learning framework with subgraph-subgraph contrast. Specifically, we regard the original input graph as the first view and generate the second view by graph augmentation with edge modifications. Guided by maximizing the similarity between subgraph pairs, the proposed subgraph-subgraph contrast yields more robust subgraph embeddings despite structural variations. Moreover, the introduced subgraph-subgraph contrast cooperates well with the widely adopted node-subgraph and node-node contrastive counterparts, mutually promoting GAD performance. In addition, we conduct extensive experiments to investigate the impact of different graph augmentation approaches on detection performance. The comprehensive experimental results demonstrate the superiority of our method over state-of-the-art approaches and the effectiveness of the multi-view subgraph pair contrastive strategy for the GAD task.