Characterizing the function spaces corresponding to neural networks can provide a way to understand their properties. In this paper we discuss how the theory of reproducing kernel Banach spaces can be used to tackle this challenge. In particular, we prove a representer theorem for a wide class of reproducing kernel Banach spaces that admit a suitable integral representation and include one-hidden-layer neural networks of possibly infinite width. Further, we show that, for a suitable class of ReLU activation functions, the norm in the corresponding reproducing kernel Banach space can be characterized in terms of the inverse Radon transform of a bounded real measure, with norm given by the total variation norm of the measure. Our analysis simplifies and extends recent results in [34, 29, 30].
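As a schematic illustration of the integral representation referenced above (the notation here is ours, not the paper's precise statement: $\sigma$ denotes the activation function and $\mu$ a bounded real measure on the neuron parameter space), an infinite-width one-hidden-layer network and its associated variation-type norm can be written as
\[
  f(x) \;=\; \int \sigma\bigl(\langle w, x \rangle + b\bigr)\, d\mu(w, b),
  \qquad
  \|f\| \;=\; \inf\Bigl\{\, \|\mu\|_{TV} \;:\; f = \int \sigma\bigl(\langle w, \cdot \rangle + b\bigr)\, d\mu(w, b) \Bigr\},
\]
where $\|\mu\|_{TV}$ is the total variation norm of $\mu$ and the infimum runs over all measures representing $f$.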