The Tensor-Train (TT) format is a highly compact low-rank representation for high-dimensional tensors. TT is particularly useful for representing approximations to the solutions of certain types of parametrized partial differential equations. For many of these problems, computing the solution explicitly would require an infeasible amount of memory and computational time. While the TT format makes these problems tractable, iterative techniques for solving the PDEs must be adapted to perform arithmetic while maintaining the implicit structure. The fundamental operation used to maintain feasible memory and computational time is called rounding, which truncates the internal ranks of a tensor already in TT format. We propose several randomized algorithms for this task that generalize randomized low-rank matrix approximation algorithms and provide a significant reduction in computation compared to deterministic TT-rounding algorithms. Randomization is particularly effective for rounding a sum of TT-tensors (where we observe a 20x speedup), which is the bottleneck computation in the adaptation of GMRES to vectors in TT format. We present the randomized algorithms and compare their empirical accuracy and computational time with deterministic alternatives.
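To give a concrete sense of the matrix building block that the TT-rounding algorithms generalize, the following is a minimal sketch of a randomized range-finder-based low-rank approximation (in the style of Halko, Martinsson, and Tropp). It is an illustrative matrix-level example only, not the paper's TT algorithm; the function name and oversampling parameter are our own choices for the sketch.

```python
import numpy as np

def randomized_lowrank(A, r, oversample=5, seed=0):
    """Sketch of a randomized rank-(r+oversample) approximation A ~= Q @ B.

    A Gaussian sketch Y = A @ Omega captures the dominant range of A;
    orthonormalizing Y gives a basis Q, and B = Q.T @ A is the small factor.
    """
    rng = np.random.default_rng(seed)
    Omega = rng.standard_normal((A.shape[1], r + oversample))  # random test matrix
    Q, _ = np.linalg.qr(A @ Omega)  # orthonormal basis for the sketched range
    B = Q.T @ A                     # small (r+oversample) x n factor
    return Q, B

# Example: a 200 x 200 matrix of exact rank 10 is recovered almost exactly.
rng = np.random.default_rng(1)
A = rng.standard_normal((200, 10)) @ rng.standard_normal((10, 200))
Q, B = randomized_lowrank(A, 10)
err = np.linalg.norm(A - Q @ B) / np.linalg.norm(A)
```

The appeal of this scheme, and of its TT generalizations, is that the expensive deterministic orthogonalization pass over the full object is replaced by multiplication with a small random matrix followed by a QR factorization of a thin sketch.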