In this article, we study a complete theory of regularized learning for generalized data in Banach spaces, including representer theorems, approximation theorems, and convergence theorems. The generalized input data consist of linear functionals in the predual spaces of the Banach spaces, which represent the discrete local information of various engineering and physics models. The empirical risks are computed from the generalized data and multi-loss functions, and regularized learning minimizes the regularized empirical risks over the Banach spaces. Even if the original problems are unknown or unformulated, their exact solutions are approximated globally by regularized learning. In the proof of the convergence theorems, the strong convergence condition is replaced by a weak convergence condition together with an additional checkable condition that is independent of the original problems. The theorems of regularized learning can be applied to many problems of machine learning, such as support vector machines and neural networks.
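To make the setting concrete, here is a minimal numerical sketch (not taken from the paper) of regularized empirical risk minimization with generalized data: each datum is a linear functional applied to the unknown model rather than a plain point value. The hypothesis space, the choice of functionals (a point evaluation and a derivative evaluation), the squared-error loss, and the Tikhonov regularizer are all illustrative assumptions, chosen so the finite-dimensional problem has a closed-form solution.

```python
# Illustrative sketch: regularized learning with generalized data.
# Hypothesis space: span of the monomials 1, x, x^2 (finite-dimensional,
# hence a Banach space).  Each generalized datum is a linear functional
# lambda_i represented as a row vector acting on the coefficient vector c.

def lam_eval(x):
    # point-evaluation functional: f -> f(x), as a row over (1, x, x^2)
    return [1.0, x, x * x]

def lam_deriv(x):
    # derivative functional: f -> f'(x), as a row over (1, x, x^2)
    return [0.0, 1.0, 2.0 * x]

def fit(rows, y, mu):
    # Minimize (1/n) sum_i (lambda_i(f) - y_i)^2 + mu * ||c||_2^2 by
    # solving the normal equations (A^T A / n + mu I) c = A^T y / n
    # with Gaussian elimination (no pivoting; the matrix is SPD here).
    n, d = len(rows), len(rows[0])
    M = [[sum(rows[i][p] * rows[i][q] for i in range(n)) / n
          + (mu if p == q else 0.0) for q in range(d)] for p in range(d)]
    b = [sum(rows[i][p] * y[i] for i in range(n)) / n for p in range(d)]
    for p in range(d):                      # forward elimination
        for q in range(p + 1, d):
            r = M[q][p] / M[p][p]
            M[q] = [mq - r * mp for mq, mp in zip(M[q], M[p])]
            b[q] -= r * b[p]
    c = [0.0] * d
    for p in reversed(range(d)):            # back substitution
        c[p] = (b[p] - sum(M[p][q] * c[q] for q in range(p + 1, d))) / M[p][p]
    return c

# generalized data: f(0) = 1, f(1) = 3, f'(0) = 2, all consistent
# with the exact solution f(x) = 1 + 2x
rows = [lam_eval(0.0), lam_eval(1.0), lam_deriv(0.0)]
y = [1.0, 3.0, 2.0]
c = fit(rows, y, mu=1e-8)
print(c)  # close to [1, 2, 0] for small mu
```

Note how the derivative datum enters the empirical risk exactly like a point value: only the row representation of the functional changes, which is the sense in which generalized data encode discrete local information about a model.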