This work studies the problem of completing high-dimensional data (referred to as tensors) from partially observed samples. We model a tensor as a superposition of multiple low-rank components, where each component can be represented as multilinear connections over several latent factors and is naturally mapped to a specific tensor network (TN) topology. In this paper, we propose a fundamental tensor decomposition (TD) framework: Multi-Tensor Network Representation (MTNR), which can be regarded as a linear combination of a range of TD models, e.g., CANDECOMP/PARAFAC (CP) decomposition, Tensor Train (TT), and Tensor Ring (TR). Specifically, MTNR represents a high-order tensor as the sum of multiple TN models, and the topology of each TN is generated automatically rather than manually pre-designed. For the optimization phase, an adaptive topology learning (ATL) algorithm is presented to obtain the latent factors of each TN based on a rank incremental strategy and a projection error measurement strategy. In addition, we theoretically establish the fundamental multilinear operations for tensors in TN representation, and reveal the structural transformation of MTNR into a single TN. Finally, MTNR is applied to a typical task, tensor completion, and two effective algorithms are proposed for the exact recovery of incomplete data based on the Alternating Least Squares (ALS) scheme and the Alternating Direction Method of Multipliers (ADMM) framework. Extensive numerical experiments on synthetic data and real-world datasets demonstrate the effectiveness of MTNR compared with state-of-the-art methods.
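For intuition, a minimal sketch of the MTNR completion objective, written with assumed notation (an observed tensor $\mathcal{T}$, a $K$-component mixture, and an observation mask $\Omega$) rather than the paper's exact symbols:
\[
\min_{\{\mathcal{X}_k\}} \;\Big\| \mathcal{P}_{\Omega}\Big( \mathcal{T} - \sum_{k=1}^{K} \mathcal{X}_k \Big) \Big\|_F^2
\quad \text{s.t.} \quad \mathcal{X}_k = \mathrm{TN}\big(\mathcal{G}^{(k)}_1, \dots, \mathcal{G}^{(k)}_N\big), \; k = 1, \dots, K,
\]
where each component $\mathcal{X}_k$ is contracted from latent factors $\mathcal{G}^{(k)}_n$ according to its own (automatically learned) TN topology, and $\mathcal{P}_{\Omega}$ retains only the observed entries.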