Symmetry is a fundamental tool in the exploration of a broad range of complex systems. In machine learning, symmetry has been explored in both models and data. In this paper, we seek to connect the symmetries arising from the architecture of a family of models with the symmetries of that family's internal representation of data. We do this by calculating a set of fundamental symmetry groups, which we call the intertwiner groups of the model. We connect intertwiner groups to a model's internal representations of data through a range of experiments that probe similarities between hidden states across models with the same architecture. Our work suggests that the symmetries of a network propagate into the symmetries of that network's representation of data, giving us a better understanding of how architecture affects the learning and prediction process. Finally, we speculate that for ReLU networks, the intertwiner groups may provide a justification for the common practice of concentrating model-interpretability exploration on the activation basis in hidden layers rather than on arbitrary linear combinations thereof.
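As a minimal sketch of the kind of architectural symmetry the abstract refers to, consider a one-hidden-layer ReLU network. Because relu(c·z) = c·relu(z) for c > 0, composing a permutation of the hidden units with a positive diagonal rescaling before the nonlinearity, and undoing it afterward, leaves the network function unchanged. The NumPy example below demonstrates this invariance; the network shape, the variable names, and the use of NumPy are illustrative assumptions, not the paper's own code or its general construction of intertwiner groups.

```python
import numpy as np

rng = np.random.default_rng(0)

# A one-hidden-layer ReLU network: f(x) = W2 @ relu(W1 @ x + b1) + b2.
# (Hypothetical dimensions chosen for illustration.)
d_in, d_hidden, d_out = 4, 6, 3
W1, b1 = rng.normal(size=(d_hidden, d_in)), rng.normal(size=d_hidden)
W2, b2 = rng.normal(size=(d_out, d_hidden)), rng.normal(size=d_out)

relu = lambda z: np.maximum(z, 0.0)
f = lambda x, W1, b1, W2, b2: W2 @ relu(W1 @ x + b1) + b2

# A candidate symmetry: a permutation P of the hidden units composed with
# a positive diagonal rescaling D. Since relu(c * z) = c * relu(z) for
# c > 0, relu commutes with M = D @ P, so applying M to the incoming
# weights and M^{-1} to the outgoing weights preserves the function.
P = np.eye(d_hidden)[rng.permutation(d_hidden)]    # permutation matrix
D = np.diag(rng.uniform(0.5, 2.0, size=d_hidden))  # positive scalings
M = D @ P
M_inv = np.linalg.inv(M)

W1_t, b1_t = M @ W1, M @ b1  # transform incoming weights and biases
W2_t = W2 @ M_inv            # undo the transformation on the way out

x = rng.normal(size=d_in)
print(np.allclose(f(x, W1, b1, W2, b2), f(x, W1_t, b1_t, W2_t, b2)))  # True
```

Two networks related by such a transformation compute identical functions while having different weights and hidden activations, which is why experiments comparing hidden states across models with the same architecture must account for these symmetries.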