Symmetries built into a neural network have proven beneficial for a wide range of tasks, as they reduce the amount of data needed to learn them. We depart from the position that when symmetries are not built into a model a priori, it is advantageous for robust networks to learn symmetries directly from the data to fit a task function. In this paper, we present a method to extract the symmetries learned by a neural network and to evaluate the degree to which the network is invariant to them. With our method, we are able to explicitly retrieve learned invariances in the form of the generators of the corresponding Lie groups, without prior knowledge of the symmetries in the data. We use the proposed method to study how symmetry properties depend on a neural network's parameterization and configuration. We find that the ability of a network to learn symmetries generalizes over a range of architectures; however, the quality of the learned symmetries depends on the depth and the number of parameters.
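As a concrete illustration of how such generators can be recovered, the sketch below implements one standard recipe under assumptions of our own; the paper's exact procedure may differ. For a network f invariant under x → exp(tA)x, any generator A satisfies ∇f(x)ᵀAx = 0, so stacking these linear constraints over sampled inputs and taking the near-null space of the resulting matrix via an SVD yields candidate generators, with the trailing singular values measuring how invariant the network actually is. The function name `learned_generators` and the toy rotation-invariant f are hypothetical.

```python
import torch

# Hypothetical sketch: recover an infinitesimal generator of a learned symmetry.
# For f invariant under x -> exp(tA) x, the generator A in R^{n x n} satisfies
# grad f(x)^T (A x) = 0 for all x. Each sample x_i contributes one linear
# constraint vec(grad f(x_i) x_i^T) . vec(A) = 0; the (near-)null space of the
# stacked constraint matrix spans the learned generators.

def learned_generators(f, xs, num_generators=1):
    """Estimate symmetry generators learned by `f` from input samples `xs`.

    f  : callable mapping a batch (N, n) to scalars (N,)
    xs : tensor of shape (N, n), inputs sampled from the data domain
    """
    xs = xs.clone().requires_grad_(True)
    grads = torch.autograd.grad(f(xs).sum(), xs)[0]   # (N, n), rows = grad f(x_i)
    # Row i of E is vec(grad f(x_i) x_i^T), so E @ vec(A) = 0 for a true generator.
    E = torch.einsum('ni,nj->nij', grads, xs.detach()).reshape(len(xs), -1)
    # Right singular vectors with the smallest singular values span the null space.
    _, S, Vh = torch.linalg.svd(E, full_matrices=True)
    n = xs.shape[1]
    gens = Vh[-num_generators:].reshape(num_generators, n, n)
    return gens, S  # trailing singular values near zero indicate good invariance

# Toy check: a rotation-invariant function on R^2 should yield the so(2)
# generator [[0, -1], [1, 0]] up to scale and sign.
if __name__ == "__main__":
    f = lambda x: (x ** 2).sum(dim=-1)                # invariant to 2-D rotations
    xs = torch.randn(256, 2)
    gens, S = learned_generators(f, xs)
    print(gens[0])                                    # ~ antisymmetric generator
    print(S)                                          # last singular value ~ 0
```

For a trained network rather than this toy f, the same routine applies unchanged; the magnitude of the trailing singular values then quantifies the degree to which the network is invariant to the recovered transformation.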