We study the natural function space for infinitely wide two-layer neural networks with ReLU activation (Barron space) and establish different representation formulae. In two cases, we describe the space explicitly up to isomorphism. Using a convenient representation, we study the pointwise properties of two-layer networks and show that functions whose singular set is fractal or curved (for example, distance functions from smooth submanifolds) cannot be represented by infinitely wide two-layer networks with finite path-norm. We use this structure theorem to show that the only $C^1$-diffeomorphisms that preserve Barron space are affine maps. Furthermore, we show that every Barron function can be decomposed as the sum of a bounded and a positively one-homogeneous function, and that there exist Barron functions which decay rapidly at infinity and are globally Lebesgue-integrable. This result suggests that two-layer neural networks may be able to approximate a greater variety of functions than commonly believed.
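For orientation, here is a minimal sketch of the standard definition used in the Barron-space literature; the measure $\mu$ and the norm notation $\|\cdot\|_{\mathcal{B}}$ below are illustrative conventions, not fixed by this abstract. A Barron function $f:\mathbb{R}^d\to\mathbb{R}$ is one admitting an integral (infinitely wide two-layer) representation
\[
  f(x) = \int_{\mathbb{R}\times\mathbb{R}^d\times\mathbb{R}} a\,\sigma(w\cdot x + b)\,\mu(\mathrm{d}a,\mathrm{d}w,\mathrm{d}b),
  \qquad \sigma(z) = \max\{z,0\},
\]
for some finite measure $\mu$, and the path-norm (Barron norm) is the infimum over all representing measures
\[
  \|f\|_{\mathcal{B}} = \inf_{\mu} \int |a|\,\bigl(|w| + |b|\bigr)\,\mu(\mathrm{d}a,\mathrm{d}w,\mathrm{d}b).
\]
Finite path-norm is exactly the condition under which the abstract's non-representability results (e.g., for functions with fractal or curved singular sets) apply.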