Symmetric functions, which take as input an unordered, fixed-size set, are known to be universally representable by neural networks that enforce permutation invariance. These architectures, however, only give guarantees for a fixed input size, yet in many practical applications, including point clouds and particle physics, a relevant notion of generalization should include varying the input size. In this work we treat symmetric functions (of any size) as functions over probability measures, and study the learning and representation of neural networks defined on measures. Focusing on shallow architectures, we establish approximation and generalization bounds under different choices of regularization (such as RKHS and variation norms) that capture a hierarchy of functional spaces with increasing degrees of non-linear learning. The resulting models can be learned efficiently and enjoy generalization guarantees that extend across input sizes, as we verify empirically.
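
As a concrete illustration (not the paper's exact model), the following minimal PyTorch sketch shows one way a shallow, permutation-invariant network can be viewed as acting on the empirical measure of its input set: a per-element feature map is averaged (an expectation under the empirical measure) before a linear readout, so the same parameters apply to sets of any size. The class name ShallowMeasureNet and all dimensions are illustrative assumptions.

    # Minimal sketch, assuming a mean-pooling (DeepSets-style) shallow model;
    # not the specific architecture or regularization studied in the paper.
    import torch
    import torch.nn as nn

    class ShallowMeasureNet(nn.Module):
        def __init__(self, in_dim, hidden_dim):
            super().__init__()
            self.phi = nn.Linear(in_dim, hidden_dim)   # per-element feature map
            self.readout = nn.Linear(hidden_dim, 1)    # linear readout of pooled features

        def forward(self, x):
            # x: (batch, set_size, in_dim); set_size may differ between calls.
            pooled = torch.relu(self.phi(x)).mean(dim=1)  # expectation under the empirical measure
            return self.readout(pooled).squeeze(-1)

    net = ShallowMeasureNet(in_dim=3, hidden_dim=64)
    small_sets = torch.randn(8, 10, 3)    # sets of 10 points
    large_sets = torch.randn(8, 1000, 3)  # sets of 1000 points, same parameters
    print(net(small_sets).shape, net(large_sets).shape)  # torch.Size([8]) torch.Size([8])

Because the pooled representation is an average over set elements, evaluating the same network on sets of 10 or 1000 points requires no change to its parameters, which is the sense in which generalization across input sizes is meaningful here.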