We study two-layer neural networks whose domain and range are Banach spaces with separable preduals. In addition, we assume that the image space is equipped with a partial order, i.e. it is a Riesz space. As the nonlinearity we choose the lattice operation of taking the positive part; in the case of $\mathbb R^d$-valued neural networks this corresponds to the ReLU activation function. We prove inverse and direct approximation theorems with Monte-Carlo rates for a certain class of functions, extending existing results for the finite-dimensional case. In the second part of the paper, we study, from the regularisation theory viewpoint, the problem of finding optimal representations of such functions via signed measures on a latent space from a finite number of noisy observations. We discuss regularity conditions known as source conditions and obtain convergence rates in a Bregman distance for the representing measure in the regime where the noise level goes to zero and the number of samples goes to infinity at appropriate rates.
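As an illustrative sketch (the notation here is introduced only for concreteness and is not taken from the paper), in the finite-dimensional setting a two-layer network represented by a signed measure $\mu$ on a latent parameter space $\Omega \subset \mathbb R^d \times \mathbb R$ can be written as
\[
f_\mu(x) \;=\; \int_{\Omega} \bigl(\langle w, x\rangle + b\bigr)_+ \,\mathrm d\mu(w,b),
\qquad x \in \mathbb R^d,
\]
where $(\cdot)_+$ denotes the positive part, i.e. the ReLU nonlinearity; a finite-width network corresponds to $\mu$ being a finite weighted sum of Dirac measures.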