A recurrent neural network (RNN) is a deep-learning architecture widely used for sequential data. Because it imitates a dynamical system, an infinite-width RNN can approximate any open dynamical system on a compact domain. In practice, deep networks with bounded width are often more effective than wide shallow networks; however, the universal approximation property of deep narrow architectures has yet to be studied extensively. In this work, we prove the universality of deep narrow RNNs and show that the upper bound on the minimum width required for universality can be independent of the length of the data. Specifically, we show that a deep RNN with ReLU activation can approximate any continuous function or $L^p$ function with width $d_x+d_y+2$ or $\max\{d_x+1,d_y\}$, respectively, where the target function maps a finite sequence of vectors in $\mathbb{R}^{d_x}$ to a finite sequence of vectors in $\mathbb{R}^{d_y}$. We also compute the additional width required when the activation function is $\tanh$ or a more general activation. In addition, we prove the universality of other recurrent networks, such as bidirectional RNNs. By bridging multi-layer perceptrons and RNNs, our theory and proof technique can serve as an initial step toward further research on deep RNNs.
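To make the stated width bound concrete, the sketch below builds a deep narrow RNN whose hidden width equals $d_x+d_y+2$, the bound given above for approximating continuous targets with ReLU activation. It assumes a standard stacked (Elman-style) RNN with a linear readout, as provided by PyTorch; the paper's exact construction may differ, and the values of $d_x$, $d_y$, and the depth `L` are purely illustrative.

```python
# Minimal sketch (not the paper's construction): a deep narrow ReLU RNN whose
# hidden width matches the d_x + d_y + 2 bound for continuous targets.
import torch
import torch.nn as nn

d_x, d_y, L = 3, 2, 8      # input dim, output dim, number of recurrent layers (hypothetical values)
width = d_x + d_y + 2      # width upper bound stated in the abstract for continuous functions

class DeepNarrowRNN(nn.Module):
    def __init__(self):
        super().__init__()
        # Stacked ReLU RNN: every layer shares the same narrow hidden width.
        self.rnn = nn.RNN(input_size=d_x, hidden_size=width,
                          num_layers=L, nonlinearity='relu', batch_first=True)
        # Linear readout from the narrow hidden state to the d_y-dimensional output.
        self.readout = nn.Linear(width, d_y)

    def forward(self, x):        # x: (batch, sequence length, d_x)
        h, _ = self.rnn(x)       # h: (batch, sequence length, width)
        return self.readout(h)   # (batch, sequence length, d_y)

model = DeepNarrowRNN()
y = model(torch.randn(4, 10, d_x))  # a length-10 sequence in R^{d_x} mapped to one in R^{d_y}
print(y.shape)                      # torch.Size([4, 10, 2])
```

Note that the result bounds the width, not the depth: the number of layers `L` remains a free parameter that grows with the desired approximation accuracy.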