Large-scale Dynamic Networks (LDNs) are becoming increasingly important in the Internet age. Their dynamic nature, i.e., the evolution of the network structure and of edge weights over time, poses unique challenges for data analysis and modeling. A Latent Factorization of Tensors (LFT) model enables efficient representation learning for an LDN, but existing LFT models are mostly based on Canonical Polyadic Factorization (CPF). This work therefore proposes a model based on Tensor Ring (TR) decomposition for efficient representation learning on an LDN. Specifically, we incorporate the principle of single latent factor-dependent, non-negative, and multiplicative update (SLF-NMU) into the TR decomposition model, and analyze the particular bias form of TR decomposition. Experimental studies on two real LDNs demonstrate that the proposed method achieves higher accuracy than existing models.
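To make the TR format concrete (this is a generic NumPy sketch of standard Tensor Ring reconstruction, not the paper's actual model or learning algorithm): each mode n of an N-th-order tensor gets a core G_n of shape (r_{n-1}, I_n, r_n), the ring closes because r_0 = r_N, and entry X[i_1, ..., i_N] is the trace of the product of the corresponding core slices.

```python
import numpy as np

def tr_reconstruct(cores):
    """Reconstruct a tensor from Tensor Ring (TR) cores.

    Each core G_n has shape (r_{n-1}, I_n, r_n), with r_0 == r_N so
    the chain of ranks closes into a ring. Entry X[i_1, ..., i_N] is
    the trace of the product of the selected lateral core slices.
    """
    shape = tuple(g.shape[1] for g in cores)
    X = np.zeros(shape)
    for idx in np.ndindex(*shape):
        # Multiply one slice per core, then close the ring with a trace.
        M = np.eye(cores[0].shape[0])
        for g, i in zip(cores, idx):
            M = M @ g[:, i, :]
        X[idx] = np.trace(M)
    return X

# Toy example: a 4 x 3 x 5 tensor with all TR ranks equal to 2.
rng = np.random.default_rng(0)
cores = [rng.standard_normal((2, 4, 2)),
         rng.standard_normal((2, 3, 2)),
         rng.standard_normal((2, 5, 2))]
X = tr_reconstruct(cores)
print(X.shape)  # (4, 3, 5)
```

Unlike CPF, which uses a single shared rank across all modes, the TR format lets each pair of adjacent modes carry its own rank r_n, which is part of what makes it a more flexible representation for an LDN's node-node-time tensor.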