Adversarial training is an approach to increasing a model's resilience against adversarial perturbations. Such approaches have been shown to produce models whose feature representations generalize better. However, little work has been done on adversarial training of models on graph data. In this paper, we ask the question: does adversarial training improve the generalization of graph representations? We formulate L2 and L1 versions of adversarial training for two powerful node embedding methods, the graph autoencoder (GAE) and the variational graph autoencoder (VGAE). Through extensive experiments on three main applications of GAE and VGAE, i.e., link prediction, node clustering, and graph anomaly detection, we demonstrate that both L2 and L1 adversarial training boost the generalization of GAE and VGAE.
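To illustrate the distinction between the L2 and L1 variants, the sketch below shows one common way to construct norm-constrained adversarial perturbations of node features from a loss gradient. This is a generic, hypothetical helper (the function name, the per-node budget `eps`, and the gradient-based construction are assumptions for illustration), not the paper's exact formulation: an L2 budget rescales each node's gradient direction, while an L1 budget concentrates the entire budget on the single largest-magnitude coordinate per node (steepest ascent under the L1 constraint).

```python
import numpy as np

def adversarial_perturbation(grad, eps, norm="l2"):
    """Build a per-node perturbation delta with ||delta_i|| <= eps.

    grad: gradient of the training loss w.r.t. the node feature
    matrix, shape (n_nodes, n_features).
    Hypothetical helper sketching norm-constrained adversarial
    training; not the paper's exact formulation.
    """
    if norm == "l2":
        # Normalize each node's gradient to unit L2 norm, then
        # scale by the budget eps (steepest ascent in L2).
        norms = np.linalg.norm(grad, axis=1, keepdims=True) + 1e-12
        delta = eps * grad / norms
    elif norm == "l1":
        # Steepest ascent under an L1 constraint: spend the whole
        # budget on the largest-magnitude coordinate of each node.
        delta = np.zeros_like(grad)
        rows = np.arange(grad.shape[0])
        cols = np.argmax(np.abs(grad), axis=1)
        delta[rows, cols] = eps * np.sign(grad[rows, cols])
    else:
        raise ValueError(f"unknown norm: {norm}")
    return delta
```

In an adversarial training loop, the resulting `delta` would be added to the node features (or embeddings) before recomputing the reconstruction loss, so the encoder learns representations that are stable under such perturbations.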