Recurrent Neural Networks (RNNs) are a fundamental structure in deep learning. Recently, several works have studied the training process of over-parameterized neural networks and shown that over-parameterized networks can learn functions in certain notable concept classes with a provable generalization error bound. In this paper, we analyze the training and generalization of RNNs with random initialization, and provide the following improvements over recent works: 1) For an RNN with input sequence $x=(X_1,X_2,\ldots,X_L)$, previous works study learning functions that are summations of $f(\beta_l^T X_l)$ and require normalization conditions $\|X_l\|\leq\epsilon$ for some very small $\epsilon$ depending on the complexity of $f$. In this paper, using a detailed analysis of the neural tangent kernel matrix, we prove a generalization error bound for learning such functions without the normalization conditions, and show that some notable concept classes are learnable with the numbers of iterations and samples scaling almost polynomially in the input length $L$. 2) Moreover, we prove a novel result on learning $N$-variable functions of the input sequence of the form $f(\beta^T[X_{l_1},\ldots,X_{l_N}])$, which do not belong to the ``additive'' concept class, i.e., summations of functions $f(X_l)$. We show that when either $N$ or $l_0=\max(l_1,\ldots,l_N)-\min(l_1,\ldots,l_N)$ is small, $f(\beta^T[X_{l_1},\ldots,X_{l_N}])$ is learnable with the number of iterations and samples scaling almost polynomially in the input length $L$.
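To make the two concept classes above concrete, here is a minimal, hypothetical Python sketch of target functions of the two forms; the sequence length $L$, the scalar function $f$, and the weight vectors are illustrative placeholders rather than the paper's construction.

```python
import numpy as np

rng = np.random.default_rng(0)
L, d = 10, 5                      # input length and token dimension (illustrative values)
X = rng.normal(size=(L, d))       # input sequence x = (X_1, ..., X_L)

f = np.tanh                       # a smooth scalar function standing in for f

# Additive concept class: sum over positions of f(beta_l^T X_l)
betas = rng.normal(size=(L, d))
y_additive = sum(f(betas[l] @ X[l]) for l in range(L))

# N-variable concept class: f(beta^T [X_{l_1}, ..., X_{l_N}])
idx = [2, 3, 5]                   # N = 3 chosen positions, so l_0 = max - min = 3
beta = rng.normal(size=(len(idx) * d,))
y_nvar = f(beta @ np.concatenate([X[l] for l in idx]))

print(y_additive, y_nvar)
```

In this toy setup the first target is a position-wise sum, while the second couples the selected positions jointly through a single inner product, which is why it falls outside the ``additive'' class.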