Neural networks that satisfy invariance with respect to input permutations have been widely studied in the machine learning literature. However, in many applications, only a subset of all input permutations is of interest. For heterogeneous graph data, one can focus on permutations that preserve node types. We fully characterize linear layers invariant to such permutations. We verify experimentally that implementing these layers in graph neural network architectures enables learning important node interactions more effectively than existing techniques. We show that the dimension of the space of these layers is given by a generalization of Bell numbers, extending the work of Maron et al. (2019). We further narrow the invariant network design space by addressing a question about the sizes of tensor layers necessary for function approximation on graph data. Our findings suggest that function approximation on a graph with $n$ nodes can be done with tensors of sizes $\leq n$, which is tighter than the best-known bound $\leq n(n-1)/2$. For $d \times d$ image data with translation symmetry, our methods give a tight upper bound of $2d - 1$ (instead of $d^{4}$) on the sizes of invariant tensor generators via a surprising connection to Davenport constants.
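As a point of reference for the Bell-number claim, the following is a minimal worked instance using only the standard Bell recurrence; it illustrates the count of Maron et al. (2019) and is not the paper's generalized count for type-preserving permutations:
\[
B_{n+1} = \sum_{k=0}^{n} \binom{n}{k} B_k, \qquad B_0 = 1,
\]
which gives $B_1 = 1$, $B_2 = 2$, $B_3 = 5$, $B_4 = 15$. In particular, Maron et al. (2019) obtain $B_4 = 15$ as the dimension of the space of permutation-equivariant linear layers acting on order-$2$ tensors (graphs); restricting to node-type-preserving permutations enlarges this basis, as characterized in the present work.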