There are several factorizations of multi-dimensional tensors into lower-dimensional components, known as `tensor networks'. We consider the popular `tensor-train' (TT) format and ask how efficiently a low-rank approximation can be computed from a full tensor on current multi-core CPUs. Compared to sparse and dense linear algebra, there are far fewer well-optimized kernel libraries for multi-linear algebra, and they are less extensive. Linear algebra libraries such as BLAS and LAPACK may provide the required operations in principle, but often at the cost of additional data movements for rearranging memory layouts. Furthermore, these libraries are typically optimized for the compute-bound case (e.g.\ square matrix operations), whereas low-rank tensor decompositions lead to memory-bandwidth-limited operations. We propose a `tensor-train singular value decomposition' (TT-SVD) algorithm based on two building blocks: a `Q-less tall-skinny QR' factorization, and a fused tall-skinny matrix-matrix multiplication and reshape operation. We analyze the performance of the resulting TT-SVD algorithm using the Roofline performance model. In addition, we present performance results of different algorithmic variants on shared-memory as well as distributed-memory architectures. Our experiments show that commonly used TT-SVD implementations suffer severe performance penalties. We conclude that a dedicated library of tensor factorization kernels would benefit the community: computing a low-rank approximation can be as cheap as reading the data twice from main memory. As a consequence, an implementation that achieves realistic performance will push back the limit at which one has to resort to randomized methods that only process part of the data.
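
For concreteness, the following is a minimal NumPy sketch of the classical TT-SVD sweep (successive unfoldings and truncated SVDs) that such algorithms build on; it is only an illustration under the assumption of a fixed maximal TT rank, not the optimized Q-less tall-skinny QR variant proposed here, and the names \texttt{tt\_svd} and \texttt{max\_rank} are illustrative.

\begin{verbatim}
import numpy as np

def tt_svd(tensor, max_rank):
    """Classical TT-SVD: split a d-dimensional array into TT cores
    by a left-to-right sweep of reshapes and truncated SVDs."""
    dims = tensor.shape
    cores = []
    r_prev = 1
    # Unfold the tensor: first mode as rows, remaining modes as columns.
    work = tensor.reshape(r_prev * dims[0], -1)
    for k in range(len(dims) - 1):
        # Thin SVD of the current unfolding, truncated to at most max_rank.
        U, s, Vt = np.linalg.svd(work, full_matrices=False)
        r = min(max_rank, len(s))
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))
        # Fold the singular values into the remainder and
        # reshape it into the next unfolding.
        work = (s[:r, None] * Vt[:r, :]).reshape(r * dims[k + 1], -1)
        r_prev = r
    cores.append(work.reshape(r_prev, dims[-1], 1))
    return cores

# Example: approximate a random 8x8x8x8 tensor with TT ranks <= 5.
cores = tt_svd(np.random.rand(8, 8, 8, 8), max_rank=5)
print([c.shape for c in cores])
\end{verbatim}

Note that each step of this sweep is a tall-skinny matrix operation followed by a reshape, which is exactly the memory-bandwidth-limited pattern the abstract refers to.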