We establish a direct connection between general tensor networks and deep feed-forward artificial neural networks. The core of our results is the construction of neural-network layers that efficiently perform tensor contractions and that use commonly adopted non-linear activation functions. The resulting deep networks feature a number of edges that closely matches the contraction complexity of the tensor networks to be approximated. In the context of many-body quantum states, this result establishes that neural-network states have strictly the same or higher expressive power than practically usable variational tensor networks. As an example, we show that all matrix product states can be efficiently written as neural-network states, with a number of edges polynomial in the bond dimension and a depth logarithmic in the system size. The converse does not hold: our results imply that there exist quantum states that are not efficiently expressible in terms of matrix product states or practically usable projected entangled pair states (PEPS), but that are efficiently expressible as neural-network states.
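As a reminder of the objects involved (the symbols $A^{[i]}_{s_i}$, $D$, and $N$ used here are conventional notation assumed for illustration, not fixed by the text above), a matrix product state on $N$ sites assigns to each configuration $(s_1, \dots, s_N)$ the amplitude
\[
  \psi(s_1, \dots, s_N) \;=\; \mathrm{Tr}\!\left[ A^{[1]}_{s_1} A^{[2]}_{s_2} \cdots A^{[N]}_{s_N} \right],
\]
where each $A^{[i]}_{s_i}$ is a matrix of size at most $D \times D$, with $D$ the bond dimension. Contracting this chain for a single amplitude costs a number of operations polynomial in $D$ and linear in $N$; it is this contraction complexity that the number of edges of the deep networks described above is stated to match, with depth growing only logarithmically in $N$.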