We present a novel and mathematically transparent approach to function approximation and to the training of large, high-dimensional neural networks, based on the approximate least-squares solution of associated Fredholm integral equations of the first kind by Ritz-Galerkin discretization, Tikhonov regularization, and tensor-train methods. Practical applications to supervised learning problems of regression and classification type confirm that the resulting algorithms are competitive with state-of-the-art neural network-based methods.
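As a minimal illustration of the pipeline the abstract names, the sketch below discretizes a Fredholm integral equation of the first kind on a uniform grid and solves the resulting ill-conditioned linear system by Tikhonov-regularized least squares. The kernel, grid size, and regularization parameter are illustrative assumptions, not values from the paper, and the tensor-train compression step is omitted.

```python
import numpy as np

# Toy first-kind Fredholm equation: integral of k(x, y) f(y) dy = g(x).
# Kernel, grid, and test function are hypothetical choices for illustration.
n = 200
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]

# Quadrature discretization of the integral operator:
# (A f)[i] ~ sum_j k(x_i, x_j) f(x_j) * h
K = np.exp(-(x[:, None] - x[None, :]) ** 2 / 0.02) * h

# Synthetic ground truth and the data it generates.
f_true = np.sin(2.0 * np.pi * x)
g = K @ f_true

# First-kind equations are severely ill-conditioned, so plain least
# squares is unstable; Tikhonov regularization solves instead
#   f_lam = argmin ||K f - g||^2 + lam * ||f||^2,
# i.e. the regularized normal equations (K^T K + lam I) f = K^T g.
lam = 1e-6
f_reg = np.linalg.solve(K.T @ K + lam * np.eye(n), K.T @ g)

rel_err = np.linalg.norm(f_reg - f_true) / np.linalg.norm(f_true)
```

For a smooth target such as the sine above, the regularized solution recovers `f_true` closely; in the full method this discrete solve is replaced by a Ritz-Galerkin formulation whose large coefficient tensors are stored and manipulated in tensor-train format.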