The minimization of a data-fidelity term and an additive regularization functional gives rise to a powerful framework for supervised learning. In this paper, we present a unifying regularization functional that depends on an operator and on a generic Radon-domain norm. We establish the existence of a minimizer and give the parametric form of the solution(s) under very mild assumptions. When the norm is Hilbertian, the proposed formulation yields a solution that involves radial-basis functions and is compatible with the classical methods of machine learning. By contrast, for the total-variation norm, the solution takes the form of a two-layer neural network with an activation function that is determined by the regularization operator. In particular, we retrieve the popular ReLU networks by letting the operator be the Laplacian. We also characterize the solution for the intermediate regularization norms $\|\cdot\|=\|\cdot\|_{L_p}$ with $p\in(1,2]$. Our framework offers guarantees of universal approximation for a broad family of regularization operators or, equivalently, for a wide variety of shallow neural networks, including the cases (such as ReLU) where the activation function is increasing polynomially. It also explains the favorable role of bias and skip connections in neural architectures.
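To fix ideas, here is a minimal schematic of the variational problem and of the two-layer solution form evoked above, written with hypothetical notation (data pairs $(\boldsymbol{x}_m, y_m)$, a loss $E$, a regularization operator $\mathrm{L}$, a regularization weight $\lambda$, and a Radon-domain norm $\|\cdot\|$); it is an illustration of the stated result, not the precise statement from the paper:
\begin{align}
    &\min_{f} \ \sum_{m=1}^{M} E\bigl(y_m, f(\boldsymbol{x}_m)\bigr) \;+\; \lambda\, \|\mathrm{L} f\|, \\
    &f(\boldsymbol{x}) \;=\; \sum_{k=1}^{K} a_k\, \sigma\bigl(\boldsymbol{w}_k^{\mathsf T}\boldsymbol{x} - b_k\bigr) \;+\; \boldsymbol{c}^{\mathsf T}\boldsymbol{x} + c_0 .
\end{align}
In the total-variation case, the activation $\sigma$ is determined by $\mathrm{L}$ (for instance, $\sigma=\mathrm{ReLU}$ when $\mathrm{L}$ is the Laplacian), and the affine term $\boldsymbol{c}^{\mathsf T}\boldsymbol{x} + c_0$ corresponds to the bias and skip connections mentioned at the end of the abstract.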