In recent years, the use of machine learning has become increasingly popular in the context of lattice field theories. An essential element of such theories is symmetries, whose inclusion in the properties of a neural network can lead to substantial gains in performance and generalizability. A fundamental symmetry that usually characterizes physical systems on a lattice with periodic boundary conditions is equivariance under spacetime translations. Here we investigate the advantages of adopting translationally equivariant neural networks over non-equivariant ones. The system we consider is a complex scalar field with quartic interaction on a two-dimensional lattice in the flux representation, on which the networks carry out various regression and classification tasks. Promising equivariant and non-equivariant architectures are identified through a systematic search. We demonstrate that in most of these tasks our best equivariant architectures can perform and generalize significantly better than their non-equivariant counterparts, which applies not only to physical parameters beyond those represented in the training set, but also to different lattice sizes.
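As a minimal illustration of the key property, not the architectures studied in the paper, the sketch below checks translational equivariance for a standard convolution with circular padding in PyTorch. The field shape, channel count, and shift amounts are hypothetical choices for demonstration: a circularly padded convolution commutes with lattice translations under periodic boundary conditions, which is the symmetry the equivariant networks exploit.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical field configuration: batch of 1, two channels (e.g., the real
# and imaginary parts of a complex scalar field) on an 8x8 periodic lattice.
phi = torch.randn(1, 2, 8, 8)

# Circular padding makes the convolution respect periodic boundary conditions.
conv = nn.Conv2d(in_channels=2, out_channels=4, kernel_size=3,
                 padding=1, padding_mode="circular")

def translate(x, dx, dy):
    """Shift a lattice field by (dx, dy) sites with periodic wrap-around."""
    return torch.roll(x, shifts=(dx, dy), dims=(2, 3))

# Equivariance check: translating the input and then applying the layer gives
# the same result as applying the layer and then translating the output.
out_shift_first = conv(translate(phi, 3, 5))
out_conv_first = translate(conv(phi), 3, 5)
print(torch.allclose(out_shift_first, out_conv_first, atol=1e-6))  # True
```

Because such a layer contains no fixed positional reference, the same weights apply at every lattice site, which also suggests why equivariant networks can transfer to lattice sizes not seen during training.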