In certain situations, neural networks are trained on data that obey underlying symmetries. However, the predictions do not respect these symmetries exactly unless the symmetries are embedded in the network structure. In this work, we introduce architectures that embed a special kind of symmetry, namely invariance with respect to involutory linear/affine transformations up to parity $p=\pm 1$. We provide rigorous theorems showing that the proposed network ensures such an invariance and present qualitative arguments for a special universal approximation theorem. An adaptation of our techniques to CNN tasks for datasets with inherent horizontal/vertical reflection symmetry is demonstrated. Extensive experiments indicate that the proposed model outperforms baseline feed-forward and physics-informed neural networks while identically respecting the underlying symmetry.
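To make the notion of "invariance up to parity $p=\pm 1$ under an involutory transformation" concrete, the following is a minimal illustrative sketch, not the architecture proposed in this work: a plain MLP $f$ is symmetrized as $f_{\mathrm{sym}}(x) = \tfrac{1}{2}\big(f(x) + p\,f(Tx)\big)$ for an involutory matrix $T$ ($T^2 = I$), which guarantees $f_{\mathrm{sym}}(Tx) = p\,f_{\mathrm{sym}}(x)$ exactly. The names `f_sym`, `T`, and the averaging construction are assumptions used only for illustration.

```python
# Illustrative sketch (assumed construction, not the paper's architecture):
# enforce f_sym(T x) = p * f_sym(x) with p = +1 or -1 by averaging a plain
# MLP over the involution T, where T @ T = I.
import numpy as np

rng = np.random.default_rng(0)

# A small feed-forward network f: R^d -> R with arbitrary weights.
d = 4
W1, b1 = rng.standard_normal((16, d)), rng.standard_normal(16)
W2, b2 = rng.standard_normal((1, 16)), rng.standard_normal(1)

def f(x):
    return W2 @ np.tanh(W1 @ x + b1) + b2

# Involutory linear transformation: reflection of the first coordinate.
T = np.diag([-1.0, 1.0, 1.0, 1.0])

def f_sym(x, p=+1):
    # Symmetrized output: f_sym(T x) = p * f_sym(x) holds exactly,
    # since T @ T = I and p * p = 1.
    return 0.5 * (f(x) + p * f(T @ x))

x = rng.standard_normal(d)
for p in (+1, -1):
    lhs, rhs = f_sym(T @ x, p), p * f_sym(x, p)
    print(p, np.allclose(lhs, rhs))  # True for both parities
```

The check passes identically (up to floating-point error) because the symmetry is built into the functional form rather than learned from data, which is the property the abstract refers to as "identically respecting the underlying symmetry."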