Deep neural networks have attracted the attention of the machine learning community because of their appealing data-driven framework and their performance in several pattern recognition tasks. On the other hand, many theoretical questions about the internal operation of the network, the necessity of certain layers, and hyperparameter selection remain open. A promising strategy is based on tensor networks, which have been very successful in physical and chemical applications. In general, higher-order tensors are decomposed into sparsely interconnected lower-order tensors. This is a numerically reliable way to avoid the curse of dimensionality and to obtain a highly compressed representation of a data tensor, with good numerical properties that allow the desired approximation accuracy to be controlled. In order to compare tensor and neural networks, we first consider the identification of the classical Multilayer Perceptron using the Tensor-Train format. A comparative analysis is also carried out for the prediction of the noisy chaotic Mackey-Glass time series and the NASDAQ index. We show that the weights of a multidimensional regression model can be learned by means of tensor networks, yielding a compact representation that retains the accuracy of neural networks. Furthermore, an algorithm based on alternating least squares is proposed for approximating the weights in TT-format at a reduced computational cost. By means of a direct expression, we approximate the core estimation as the conventional solution of a general regression model, which extends the applicability of tensor structures to different algorithms.
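To make the TT idea concrete, the sketch below shows a generic TT-SVD routine in NumPy: a dense weight tensor is factored into a chain of third-order cores by sequential truncated SVDs, and the parameter count of the TT-format is compared against the dense representation. This is an illustrative sketch, not the paper's ALS-based algorithm; the reshaping of a 256x256 weight matrix into a 4-way tensor and the rank bound of 8 are assumptions chosen only for the example.

```python
import numpy as np

def tt_svd(tensor, max_rank):
    """Decompose a d-way tensor into TT-cores via sequential truncated SVDs.
    Generic TT-SVD sketch; truncation is controlled by a fixed rank bound."""
    shape = tensor.shape
    cores = []
    r_prev = 1
    c = tensor.reshape(shape[0], -1)
    for k in range(len(shape) - 1):
        c = c.reshape(r_prev * shape[k], -1)
        u, s, vt = np.linalg.svd(c, full_matrices=False)
        r = min(max_rank, s.size)            # truncate to the requested TT-rank
        cores.append(u[:, :r].reshape(r_prev, shape[k], r))
        c = s[:r, None] * vt[:r]             # carry the remainder to the next step
        r_prev = r
    cores.append(c.reshape(r_prev, shape[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract the TT-cores back into the full tensor (to check accuracy)."""
    full = cores[0].reshape(cores[0].shape[1], -1)   # (n1, r1)
    shape = [cores[0].shape[1]]
    for g in cores[1:]:
        r_prev, n, r = g.shape
        full = (full @ g.reshape(r_prev, n * r)).reshape(-1, r)
        shape.append(n)
    return full.reshape(shape)

# Illustrative example: a 256x256 weight matrix viewed as a 16x16x16x16 tensor.
rng = np.random.default_rng(0)
w = rng.standard_normal((16, 16, 16, 16))
cores = tt_svd(w, max_rank=8)
w_hat = tt_reconstruct(cores)
print(f"relative error: {np.linalg.norm(w - w_hat) / np.linalg.norm(w):.3e}")
print(f"parameters: {w.size} (dense) vs {sum(g.size for g in cores)} (TT)")
```

With these assumed sizes the TT representation stores roughly 2,300 parameters instead of 65,536; the reconstruction error reported for random data reflects the cost of the rank truncation, whereas weight tensors with genuine low-rank structure compress with much smaller loss.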