The low multilinear rank approximation, also known as the truncated Tucker decomposition, has been extensively utilized in many applications involving higher-order tensors. Popular methods for low multilinear rank approximation usually rely directly on the matrix SVD, and therefore often suffer from the notorious intermediate data explosion issue and are difficult to parallelize, especially when the input tensor is large. In this paper, we propose a new class of truncated HOSVD algorithms based on alternating least squares (ALS) for efficiently computing the low multilinear rank approximation of tensors. The proposed ALS-based approaches eliminate the redundant computation of the singular vectors of intermediate matrices and are therefore free of data explosion. The new methods are also more flexible, with an adjustable convergence tolerance, and are intrinsically parallelizable on high-performance computers. Theoretical analysis shows that the ALS iteration in the proposed algorithms is q-linearly convergent with a relatively wide convergence region. Numerical experiments with large-scale tensors from both synthetic and real-world applications demonstrate that the ALS-based methods substantially reduce the total cost of the original algorithms and are highly scalable for parallel computing.