In this paper, we propose new learning algorithms for approximating high-dimensional functions using tree tensor networks in a least-squares setting. Given a dimension tree or architecture of the tensor network, we provide an algorithm that generates a sequence of nested tensor subspaces based on a generalization of principal component analysis for multivariate functions. An optimal least-squares method is used for computing projections onto the generated tensor subspaces, using samples drawn from a distribution depending on the previously generated subspaces. We provide an error bound in expectation for the obtained approximation. Practical strategies are proposed for adapting the feature spaces and ranks to achieve a prescribed error. We also propose an algorithm that progressively constructs the dimension tree by suitable pairings of variables, which allows us to further reduce the number of samples necessary to reach that error. Numerical examples illustrate the performance of the proposed algorithms and show that stable approximations are obtained with a number of samples close to the number of free parameters of the estimated tensor networks.