Symmetry is a fundamental tool in the exploration of a broad range of complex systems. In machine learning, symmetry has been explored in both models and data. In this paper we seek to connect the symmetries arising from the architecture of a family of models with the symmetries of that family's internal representations of data. We do this by calculating a set of fundamental symmetry groups, which we call the intertwiner groups of the model. We connect intertwiner groups to a model's internal representations of data through a range of experiments that probe similarities between hidden states across models with the same architecture. Our work suggests that the symmetries of a network propagate into the symmetries of that network's representations of data, giving us a better understanding of how architecture affects the learning and prediction process. Finally, we speculate that for ReLU networks, the intertwiner groups may provide a justification for the common practice of concentrating model interpretability exploration on the activation basis in hidden layers rather than on arbitrary linear combinations thereof.
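As a concrete illustration of the kind of architectural symmetry the abstract refers to, the following minimal NumPy sketch (our own toy construction, not code from the paper) shows that for a two-layer ReLU network, composing a permutation of hidden units with a positive diagonal rescaling — an element of the corresponding intertwiner group — can be absorbed into the weights without changing the function the network computes, because `relu(c*z) = c*relu(z)` for `c > 0` and `relu` commutes with permutations:

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy two-layer ReLU network: f(x) = W2 @ relu(W1 @ x + b1) + b2.
# All dimensions here are arbitrary choices for illustration.
W1 = rng.standard_normal((5, 3))
b1 = rng.standard_normal(5)
W2 = rng.standard_normal((2, 5))
b2 = rng.standard_normal(2)

def relu(z):
    return np.maximum(z, 0.0)

def f(x, W1, b1, W2, b2):
    return W2 @ relu(W1 @ x + b1) + b2

x = rng.standard_normal(3)

# An intertwiner-group element for ReLU: a permutation P of the hidden
# units composed with a positive diagonal rescaling D. ReLU satisfies
# relu(G @ z) = G @ relu(z) for any such G = P @ D.
perm = rng.permutation(5)
P = np.eye(5)[perm]                           # permutation matrix
D = np.diag(rng.uniform(0.5, 2.0, size=5))    # strictly positive scales
G = P @ D

# Transform the parameters: apply G after layer 1, undo it before layer 2.
W1_t = G @ W1
b1_t = G @ b1
W2_t = W2 @ np.linalg.inv(G)

out_orig = f(x, W1, b1, W2, b2)
out_trans = f(x, W1_t, b1_t, W2_t, b2)
assert np.allclose(out_orig, out_trans)       # the function is unchanged
```

The transformed network has genuinely different weights and hidden states, yet computes the identical input-output map, which is why comparisons of hidden states across models with the same architecture must account for these symmetries.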