When fitting a deep neural network (DNN) to a target function that is invariant under a group $G$, it is natural to constrain the DNN to be $G$-invariant as well. However, there can be many different ways to impose this constraint, raising the problem of ``$G$-invariant neural architecture design'': What is the optimal $G$-invariant architecture for a given problem? Before we can consider the optimization problem itself, we must understand the search space, the architectures in it, and how they relate to one another. In this paper, we take a first step toward this goal: we prove a theorem classifying all $G$-invariant single-hidden-layer or ``shallow'' neural network ($G$-SNN) architectures with ReLU activation for any finite orthogonal group $G$, and we prove a second theorem characterizing the inclusion maps, or ``network morphisms,'' between these architectures, which can be leveraged during neural architecture search (NAS). The proof rests on a correspondence between each $G$-SNN and a signed permutation representation of $G$ acting on the hidden neurons; the classification is equivalently given in terms of the first cohomology classes of $G$, thus admitting a topological interpretation. The $G$-SNN architectures corresponding to nontrivial cohomology classes have, to our knowledge, never been explicitly identified in the literature. Using a code implementation, we enumerate the $G$-SNN architectures for some example groups $G$ and visualize their structure. Finally, we prove that architectures corresponding to inequivalent cohomology classes coincide in function space only when their weight matrices are zero, and we discuss the implications of this for NAS.
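For concreteness, the following is a minimal sketch of the setting in generic notation; the symbols here ($f$, $w_i$, $b_i$, $a_i$, $c$, $n$, $d$) are illustrative and may differ from those used in the body of the paper. A single-hidden-layer ReLU network
\[
  f(x) \;=\; \sum_{i=1}^{n} a_i\,\mathrm{ReLU}\!\left(w_i^\top x + b_i\right) + c, \qquad x \in \mathbb{R}^d,
\]
is $G$-invariant for a finite orthogonal group $G \leq \mathrm{O}(d)$ if
\[
  f(gx) \;=\; f(x) \qquad \text{for all } g \in G \text{ and } x \in \mathbb{R}^d.
\]
Under the correspondence described above, each $g \in G$ acts on the $n$ hidden neurons by a signed permutation, and it is these signed permutation representations that the classification organizes via the first cohomology classes of $G$.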