Stacking many layers to create truly deep neural networks is arguably what has led to the recent explosion of these methods. However, many properties of deep neural networks are not yet understood. One such mystery is the depth degeneracy phenomenon: the deeper you make your network, the closer your network is to a constant function on initialization. In this paper, we examine the evolution of the angle between two inputs to a ReLU neural network as a function of the number of layers. By using combinatorial expansions, we find precise formulas for how fast this angle goes to zero as depth increases. Our formulas capture microscopic fluctuations that are not visible in the popular framework of infinite width limits, and yet have a significant effect on predicted behaviour. The formulas are given in terms of the mixed moments of correlated Gaussians passed through the ReLU function. We also find a surprising combinatorial connection between these mixed moments and the Bessel numbers.
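To see the depth degeneracy phenomenon directly, here is a minimal sketch (not the paper's code) that propagates two orthogonal inputs through a randomly initialized fully connected ReLU network and prints the angle between them every few layers. The width, depth, and He-style initialization scale are illustrative assumptions rather than choices made in the paper.

```python
# Minimal illustration of depth degeneracy: the angle between two inputs
# shrinks toward zero as they pass through a randomly initialized deep
# fully connected ReLU network. Width, depth, and the He-style
# initialization (variance 2/width) are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
width, depth = 512, 50

# Two unit-norm inputs starting at a 90 degree angle.
x = np.zeros(width); x[0] = 1.0
y = np.zeros(width); y[1] = 1.0

def angle(u, v):
    """Angle in radians between two nonzero vectors."""
    c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(c, -1.0, 1.0))

for layer in range(1, depth + 1):
    # Fresh Gaussian weights each layer; ReLU applied elementwise.
    W = rng.normal(0.0, np.sqrt(2.0 / width), size=(width, width))
    x = np.maximum(W @ x, 0.0)
    y = np.maximum(W @ y, 0.0)
    if layer % 10 == 0:
        print(f"layer {layer:3d}: angle = {angle(x, y):.4f} rad")
```

Running this shows the angle decaying monotonically with depth, which is the macroscopic trend whose precise rate (including finite-width fluctuations) the paper's combinatorial expansions quantify.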