Classical results in neural network approximation theory show how arbitrary continuous functions can be approximated by networks with a single hidden layer, under mild assumptions on the activation function. However, the classical theory does not give a constructive means to generate the network parameters that achieve a desired accuracy. Recent results have demonstrated that for specialized activation functions, such as ReLUs and some classes of analytic functions, high accuracy can be achieved via linear combinations of randomly initialized activations. These recent works utilize specialized integral representations of target functions that depend on the specific activation functions used. This paper defines mollified integral representations, which provide a means to form integral representations of target functions using activations for which no direct integral representation is currently known. The new construction enables approximation guarantees for randomly initialized networks for a variety of widely used activation functions.
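To make the abstract's central idea concrete, the sketch below illustrates the general notion of approximating a target function by a linear combination of randomly initialized activations: inner weights and biases are drawn at random and frozen, and only the outer coefficients are fitted. This is a minimal illustration of the random-feature setting the abstract refers to, not the paper's mollified-integral construction; the width, sampling distributions, and ReLU activation are arbitrary choices made here for demonstration.

```python
import numpy as np

# Illustrative sketch only (not the paper's construction): approximate a 1-D
# target function by a linear combination of randomly initialized ReLU
# activations, fitting only the output-layer coefficients by least squares.
# The width N and the sampling distributions below are arbitrary choices.

rng = np.random.default_rng(0)

def target(x):
    return np.sin(2 * np.pi * x)          # target function to approximate

N = 200                                    # number of random hidden units
w = rng.normal(size=N)                     # random inner weights (fixed)
b = rng.uniform(-1.0, 1.0, size=N)         # random biases (fixed)

def features(x):
    # Hidden-layer activations: ReLU(w * x + b) for each unit
    return np.maximum(x[:, None] * w[None, :] + b[None, :], 0.0)

# Fit only the outer coefficients c so that features(x) @ c ≈ target(x)
x_train = rng.uniform(-1.0, 1.0, size=1000)
c, *_ = np.linalg.lstsq(features(x_train), target(x_train), rcond=None)

# Evaluate the approximation error on a held-out grid
x_test = np.linspace(-1.0, 1.0, 500)
err = np.max(np.abs(features(x_test) @ c - target(x_test)))
print(f"max abs error with {N} random ReLU features: {err:.3e}")
```

In this toy setting the error typically decreases as the number of random units grows; the paper's contribution concerns how to obtain such guarantees for activation functions that lack a known direct integral representation.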