This article considers fully connected neural networks with Gaussian random weights and biases as well as $L$ hidden layers, each of width proportional to a large parameter $n$. For polynomially bounded non-linearities we give sharp estimates in powers of $1/n$ for the joint cumulants of the network output and its derivatives. Moreover, we show that network cumulants form a perturbatively solvable hierarchy in powers of $1/n$ in that $k$-th order cumulants in one layer have recursions that depend to leading order in $1/n$ only on $j$-th order cumulants at the previous layer with $j\leq k$. By solving a variety of such recursions, however, we find that the depth-to-width ratio $L/n$ plays the role of an effective network depth, controlling both the scale of fluctuations at individual neurons and the size of inter-neuron correlations. Thus, while the cumulant recursions we derive form a hierarchy in powers of $1/n$, contributions of order $1/n^k$ often grow like $L^k$ and are hence non-negligible at positive $L/n$. We use this to study a somewhat simplified version of the exploding and vanishing gradient problem, proving that this particular variant occurs if and only if $L/n$ is large. Several key ideas in this article were first developed at a physics level of rigor in a recent monograph of Daniel A. Roberts, Sho Yaida, and the author. This article not only makes these ideas mathematically precise but also significantly extends them, opening the way to obtaining corrections to all orders in $1/n$.
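To make the flavor of the hierarchy concrete, here is a schematic sketch under simplifying assumptions (a single network input and scalar cumulants; the constants and error terms below are illustrative placeholders, not the paper's precise statements). With pre-activations $z_i^{(\ell+1)} = b_i^{(\ell+1)} + \sum_{j=1}^{n} W_{ij}^{(\ell+1)}\,\sigma\big(z_j^{(\ell)}\big)$, weights $W_{ij}^{(\ell)}\sim\mathcal N(0,\,C_W/n)$, and biases $b_i^{(\ell)}\sim\mathcal N(0,\,C_b)$, the second cumulant of a single neuron closes on itself to leading order:
\[
\kappa_2^{(\ell+1)} \;=\; C_b \;+\; C_W\,\mathbb{E}_{z\sim\mathcal N(0,\,\kappa_2^{(\ell)})}\!\big[\sigma(z)^2\big] \;+\; O(1/n),
\]
while the fourth cumulant is itself $O(1/n)$ and satisfies a linear recursion whose coefficients depend only on second-cumulant data from the previous layer:
\[
\kappa_4^{(\ell+1)} \;=\; \chi_\ell^{2}\,\kappa_4^{(\ell)} \;+\; \frac{c_\ell}{n} \;+\; O(1/n^2),
\qquad \chi_\ell,\; c_\ell \ \text{functions of } \kappa_2^{(\ell)}.
\]
At criticality $\chi_\ell \approx 1$, so the $O(1/n)$ source terms accumulate over $L$ layers and $\kappa_4^{(L)} = O(L/n)$. Each additional power of $1/n$ in the hierarchy thus typically comes with a compensating factor of depth, which is the sense in which the ratio $L/n$ acts as an effective network depth.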