The tensor network, as a factorization of tensors, aims to perform the operations that are common for ordinary tensors, such as addition, contraction, and stacking. However, due to its non-unique network structure, only the tensor network contraction is so far well defined. In this paper, we propose a mathematically rigorous definition of the tensor network stack approach, which compresses a large number of tensor networks into a single one without changing their structures and configurations. We illustrate the main ideas with matrix-product-state-based machine learning as an example. Our results are compared with the for-loop and the efficient coding method on both CPU and GPU.
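To make the stack operation concrete for matrix product states, below is a minimal NumPy sketch, not the paper's code: it assumes the stack is realized by embedding the site tensors of several MPS block-diagonally along their virtual bonds, one standard construction that leaves each network's structure unchanged while a single contraction recovers all individual results. The helper names (`random_mps`, `evaluate`, `stack_mps`) are illustrative assumptions.

```python
import numpy as np

def random_mps(n_sites, d=2, D=3, rng=None):
    """Open-boundary MPS: site tensors of shape (Dl, d, Dr)."""
    if rng is None:
        rng = np.random.default_rng(0)
    dims = [1] + [D] * (n_sites - 1) + [1]
    return [rng.normal(size=(dims[k], d, dims[k + 1])) for k in range(n_sites)]

def evaluate(mps, x):
    """Contract an MPS with a product state x (list of length-d vectors)."""
    env = np.ones((1, 1))
    for A, v in zip(mps, x):
        env = env @ np.einsum('ldr,d->lr', A, v)  # absorb one site
    return env[0, 0]

def stack_mps(mps_list):
    """Stack M MPS into one: each site tensor becomes the direct sum
    (block diagonal in the virtual indices) of the M originals."""
    stacked = []
    for site_tensors in zip(*mps_list):
        Dl = sum(t.shape[0] for t in site_tensors)
        Dr = sum(t.shape[2] for t in site_tensors)
        B = np.zeros((Dl, site_tensors[0].shape[1], Dr))
        lo, ro = 0, 0
        for t in site_tensors:
            B[lo:lo + t.shape[0], :, ro:ro + t.shape[2]] = t
            lo += t.shape[0]
            ro += t.shape[2]
        stacked.append(B)
    return stacked

rng = np.random.default_rng(42)
mps_a, mps_b = random_mps(4, rng=rng), random_mps(4, rng=rng)
x = [rng.normal(size=2) for _ in range(4)]
big = stack_mps([mps_a, mps_b])

# The stacked chain contracts to a diagonal matrix whose diagonal
# holds the two individual contraction results.
env = np.eye(big[0].shape[0])
for B, v in zip(big, x):
    env = env @ np.einsum('ldr,d->lr', B, v)
print(np.allclose(np.diag(env), [evaluate(mps_a, x), evaluate(mps_b, x)]))  # True
```

A single pass over the stacked tensors replaces the for-loop over individual networks, which is why the comparison against looping and batched ("efficient coding") contraction on CPU and GPU is meaningful.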