Anomaly detection on attributed networks is widely used in online shopping, financial transactions, communication networks, and so on. However, most existing methods for anomaly detection on attributed networks consider only a single kind of interaction, so they cannot deal with the various kinds of interactions found on multi-view attributed networks. Jointly considering all the different kinds of interactions and detecting anomalous instances on multi-view attributed networks remains a challenging task. In this paper, we propose a graph convolution-based framework, named AnomMAN, to detect Anomaly on Multi-view Attributed Networks. To jointly consider attributes and all kinds of interactions on multi-view attributed networks, we use an attention mechanism to define the importance of the different views in the network. Since the low-pass characteristic of the graph convolution operation filters out most high-frequency signals (i.e., anomaly signals), it cannot be directly applied to anomaly detection tasks. AnomMAN introduces a graph auto-encoder module to turn this disadvantage of the low-pass characteristic into an advantage. Experiments on real-world datasets show that AnomMAN outperforms state-of-the-art models as well as two variants of our proposed model.
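To make the high-level pipeline described above concrete, the following is a minimal, hypothetical sketch (not the paper's actual AnomMAN implementation): one graph-convolution encoder per view, an attention module that weighs the views, and an auto-encoder-style attribute reconstruction whose error serves as the anomaly score. All layer sizes, the normalized-adjacency convolution, and the scoring rule are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F


class ViewEncoder(nn.Module):
    """One graph-convolution encoder per view: H = A_hat @ (X W), a low-pass filter."""

    def __init__(self, in_dim: int, hid_dim: int):
        super().__init__()
        self.lin = nn.Linear(in_dim, hid_dim)

    def forward(self, a_hat: torch.Tensor, x: torch.Tensor) -> torch.Tensor:
        return F.relu(a_hat @ self.lin(x))


class MultiViewAnomalyDetector(nn.Module):
    """Attention-weighted fusion of per-view embeddings + reconstruction-based scoring."""

    def __init__(self, num_views: int, in_dim: int, hid_dim: int):
        super().__init__()
        self.encoders = nn.ModuleList(ViewEncoder(in_dim, hid_dim) for _ in range(num_views))
        self.attn = nn.Linear(hid_dim, 1)          # scores the importance of each view
        self.decoder = nn.Linear(hid_dim, in_dim)  # reconstructs node attributes

    def forward(self, a_hats: list, x: torch.Tensor) -> torch.Tensor:
        # Encode each view, then fuse the view embeddings with attention weights.
        view_embs = torch.stack([enc(a, x) for enc, a in zip(self.encoders, a_hats)], dim=1)
        weights = torch.softmax(self.attn(view_embs), dim=1)   # (N, V, 1)
        fused = (weights * view_embs).sum(dim=1)               # (N, hid_dim)
        x_rec = self.decoder(fused)
        # Nodes whose attributes are poorly reconstructed receive high anomaly scores.
        return ((x_rec - x) ** 2).mean(dim=1)


if __name__ == "__main__":
    n_nodes, n_feats, n_views = 6, 8, 3
    x = torch.randn(n_nodes, n_feats)
    # Toy random graphs: symmetric, self-looped, row-normalized adjacency per view.
    a_hats = []
    for _ in range(n_views):
        a = (torch.rand(n_nodes, n_nodes) > 0.7).float()
        a = ((a + a.T) > 0).float() + torch.eye(n_nodes)
        a_hats.append(a / a.sum(dim=1, keepdim=True))
    scores = MultiViewAnomalyDetector(n_views, n_feats, 16)(a_hats, x)
    print(scores)  # one anomaly score per node

In this sketch the reconstruction error plays the role of the high-frequency signal that plain graph convolution would suppress: attributes that the low-pass, attention-fused representation cannot explain are flagged as anomalous.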