Network embedding aims to learn a function that maps nodes into Euclidean space, which benefits multiple learning and analysis tasks on networks. However, both the noisy information in real-world networks and the overfitting problem negatively impact the quality of the embedding vectors. To tackle these problems, researchers have applied Adversarial Training for Network Embedding (AdvTNE) and achieved state-of-the-art performance. Unlike mainstream methods that introduce perturbations on the network structure or the data features, AdvTNE directly perturbs the model parameters, which offers a new opportunity to understand the underlying mechanism. In this paper, we explain AdvTNE theoretically from an optimization perspective. Considering the power-law property of networks and the optimization objective, we analyze the reason for its excellent results. Based on this analysis, we propose a new activation function to enhance the performance of AdvTNE. We conduct extensive experiments on four real networks to validate the effectiveness of our method on node classification and link prediction. The results demonstrate that our method outperforms state-of-the-art baseline methods.