We present a novel and mathematically transparent approach to function approximation and to the training of large, high-dimensional neural networks, based on the approximate least-squares solution of associated Fredholm integral equations of the first kind by Ritz-Galerkin discretization, Tikhonov regularization, and tensor-train methods. Practical applications to supervised learning problems of regression and classification type confirm that the resulting algorithms are competitive with state-of-the-art neural network-based methods.
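To make the named pipeline concrete, the following minimal Python/NumPy sketch illustrates its first two ingredients, a Ritz-Galerkin discretization of a Fredholm integral equation of the first kind and a Tikhonov-regularized least-squares solve, on a one-dimensional toy problem. This is not the paper's implementation: the Gaussian kernel, the manufactured solution f_true, and the parameters m, n, and lam are illustrative assumptions, and the tensor-train machinery used for high-dimensional problems is omitted.

```python
import numpy as np
from numpy.polynomial import legendre

# Midpoint quadrature nodes and weights on [0, 1].
m = 400
t = (np.arange(m) + 0.5) / m
w = np.full(m, 1.0 / m)

# Illustrative smooth kernel k(x, t) and a manufactured right-hand side
# g(x) = \int k(x, t) f(t) dt, so the recovered f can be checked.
K = np.exp(-((t[:, None] - t[None, :]) ** 2) / 0.05)  # K[i, j] = k(x_i, t_j)
f_true = np.sin(np.pi * t)
g = K @ (w * f_true)

# Ritz-Galerkin ansatz: expand f in the first n shifted Legendre
# polynomials phi_j(t) = P_j(2t - 1) on [0, 1].
n = 12
Phi = np.stack(
    [legendre.legval(2 * t - 1, np.eye(n)[j]) for j in range(n)], axis=1
)  # Phi[p, j] = phi_j(t_p)

# Galerkin system: A[i, j] ~ \iint phi_i(x) k(x, t) phi_j(t) dt dx,
# b[i] ~ \int phi_i(x) g(x) dx, both via the quadrature rule above.
W = np.diag(w)
A = Phi.T @ W @ K @ W @ Phi
b = Phi.T @ (w * g)

# Tikhonov-regularized least squares: min_c ||A c - b||^2 + lam ||c||^2,
# solved through the regularized normal equations (A is ill-conditioned,
# as is typical for first-kind Fredholm operators).
lam = 1e-8
c = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# Reconstruct the approximate solution and report its weighted L2 error.
f_hat = Phi @ c
print("L2 error:", np.sqrt(w @ (f_hat - f_true) ** 2))
```

In the high-dimensional setting targeted by the paper, the coefficient vector c would instead be parametrized in tensor-train format to keep the discretized system tractable; the dense linear-algebra solve above stands in for that step only in this low-dimensional illustration.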