We examine the closedness of sets of realized neural networks of a fixed architecture in Sobolev spaces. For an exactly $m$-times differentiable activation function $\rho$, we construct a sequence of neural networks $(\Phi_n)_{n \in \mathbb{N}}$ whose realizations converge in the order-$(m-1)$ Sobolev norm to a function that cannot be realized exactly by any neural network. Thus, sets of realized neural networks are not closed in the order-$(m-1)$ Sobolev spaces $W^{m-1,p}$ for $p \in [1,\infty)$. Under slightly stronger conditions on the $m$-th derivative of $\rho$, we further show that these sets are not closed in $W^{m,p}$. For a real analytic activation function, we show that sets of realized neural networks are not closed in $W^{k,p}$ for \textit{any} $k \in \mathbb{N}$. This nonclosedness allows non-network target functions to be approximated, but only at the cost of unbounded parameter growth. We partially characterize the rate of this growth for most activation functions by showing that a specific sequence of realized neural networks can approximate the activation function's derivative with weights growing inversely proportional to the $L^p$ approximation error. Finally, we present experimental results showing that, via training, networks can closely approximate non-network target functions with correspondingly increasing parameters.
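To make the parameter-growth phenomenon concrete, here is a minimal numerical sketch, assuming the standard difference-quotient construction (an illustration, not necessarily the paper's exact sequence $(\Phi_n)_{n \in \mathbb{N}}$): a two-neuron network realizing $x \mapsto n\rho(x + 1/n) - n\rho(x)$ converges to $\rho'$, while its outer weights grow like $n$, i.e., inversely proportional to the approximation error.

```python
import numpy as np

# Smooth activation used for the sketch; rho'(x) = 1 - tanh(x)^2,
# which is not itself realizable as a tanh network.
rho = np.tanh

def phi_n(x, n):
    # Realization of a one-hidden-layer network with two neurons:
    # inner weights (1, 1), biases (1/n, 0), outer weights (n, -n).
    # This is the difference quotient n * (rho(x + 1/n) - rho(x)).
    return n * rho(x + 1.0 / n) - n * rho(x)

x = np.linspace(-3.0, 3.0, 1001)
target = 1.0 - np.tanh(x) ** 2  # rho'(x)

# The sup-norm error (a proxy for the L^p error on a compact set)
# decays like 1/n, so the outer weight n grows inversely to the error.
for n in [10, 100, 1000]:
    err = np.max(np.abs(phi_n(x, n) - target))
    print(f"n = {n:5d}  outer weight = {n:5d}  error = {err:.2e}  n*error = {n * err:.3f}")
```

Running this, `n * error` stays roughly constant, consistent with weights growing inversely proportional to the approximation error as the realizations approach the non-network limit $\rho'$.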