The prevalent fully-connected tensor network (FCTN) has achieved excellent success in data compression. However, the FCTN decomposition suffers from slow computation when facing higher-order and large-scale data. Naturally, an interesting question arises: can a new model be proposed that decomposes the tensor into smaller factors and speeds up the computation? This work gives a positive answer by formulating a novel higher-order tensor decomposition model that introduces latent matrices into the tensor network structure, decomposing a tensor into smaller-scale factors than the FCTN decomposition; hence we name it Latent Matrices for Tensor Network decomposition (LMTN). Furthermore, three optimization algorithms, LMTN-PAM, LMTN-SVD, and LMTN-AR, have been developed and applied to the tensor-completion task. In addition, we provide proofs of theoretical convergence and a complexity analysis for these algorithms. Experimental results show that our algorithms are effective in both deep-learning dataset compression and higher-order tensor completion, and that our LMTN-SVD algorithm is 3-6 times faster than the FCTN-PAM algorithm with only a 1.8-point drop in accuracy.
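To make the motivation concrete, the following minimal sketch (not the authors' code) compares parameter counts for an FCTN decomposition of an order-4 tensor against a hypothetical latent-matrix variant in the spirit of LMTN. In an FCTN decomposition each factor carries one data mode of size I_k and N-1 rank modes; the latent-matrix counting rule, the ranks, and the dimensions below are all illustrative assumptions.

```python
# Hedged illustration only: all shapes, ranks, and the LMTN-style
# counting rule below are assumptions, not the paper's exact model.

def fctn_params(dims, R):
    """Each FCTN factor G_k has one mode of size I_k and (N-1) rank
    modes of size R, so it stores I_k * R**(N-1) entries."""
    N = len(dims)
    return sum(I * R ** (N - 1) for I in dims)

def lmtn_params(dims, R, r):
    """Hypothetical latent-matrix count: each large mode I_k is
    absorbed by a latent matrix of size I_k x r, leaving a smaller
    core whose data mode has size r instead of I_k."""
    N = len(dims)
    return sum(I * r + r * R ** (N - 1) for I in dims)

dims = [100, 100, 100, 100]      # order-4 tensor, assumed mode sizes
R, r = 5, 10                     # assumed FCTN rank and latent rank
print(fctn_params(dims, R))      # 4 * 100 * 5**3 = 50000
print(lmtn_params(dims, R, r))   # 4 * (100*10 + 10*5**3) = 9000
```

Under these assumed sizes, pulling each large mode out into a thin latent matrix shrinks the core factors substantially, which is the kind of saving the abstract attributes to LMTN.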