Recently, mean field theory has been used successfully to analyze properties of wide, random neural networks. It gave rise to a prescriptive theory for initializing feed-forward neural networks with orthogonal weights, which ensures that both the forward-propagated activations and the backpropagated gradients are approximate $\ell_2$ isometries and, as a consequence, that training is orders of magnitude faster. Despite strong empirical performance, the mechanisms by which critical initializations confer an advantage in the optimization of deep neural networks are poorly understood. Here we show a novel connection between the maximum curvature of the optimization landscape (gradient smoothness), as measured by the Fisher information matrix (FIM), and the spectral radius of the input-output Jacobian, which partially explains why more isometric networks can train much faster. Furthermore, given that orthogonal weights are necessary to ensure that gradient norms are approximately preserved at initialization, we experimentally investigate the benefits of maintaining orthogonality throughout training, from which we conclude that manifold optimization of the weights performs well regardless of the smoothness of the gradients. Moreover, motivated by experimental results, we show that a low condition number of the FIM is not predictive of faster learning.
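To make the near-isometry claim concrete, the following is a minimal illustrative sketch (our own, not code from the paper; the width, depth, and tanh nonlinearity are assumed choices): it computes the input-output Jacobian of a deep network at a random input and compares the spread of its singular values under orthogonal versus i.i.d. Gaussian weights. Exact dynamical isometry would additionally require tuning the weight and input scales to criticality, which this sketch does not attempt.

```python
# Illustrative sketch (assumptions: width, depth, tanh nonlinearity, unit weight gain).
# Compares the singular-value spread of the input-output Jacobian at initialization
# for orthogonal vs. i.i.d. Gaussian weights.
import numpy as np

rng = np.random.default_rng(0)
width, depth = 256, 20


def orthogonal(n, rng):
    # QR decomposition of a Gaussian matrix; sign correction gives a Haar-random orthogonal matrix.
    q, r = np.linalg.qr(rng.standard_normal((n, n)))
    return q * np.sign(np.diag(r))


def jacobian_singvals(weights, x):
    # Forward pass through tanh layers, accumulating the input-output Jacobian by the chain rule:
    # J = D_L W_L ... D_1 W_1, with D_l = diag(tanh'(preactivation_l)).
    jac = np.eye(len(x))
    h = x
    for W in weights:
        pre = W @ h
        h = np.tanh(pre)
        jac = np.diag(1.0 - h ** 2) @ W @ jac
    return np.linalg.svd(jac, compute_uv=False)


x = rng.standard_normal(width)
orth_w = [orthogonal(width, rng) for _ in range(depth)]
gauss_w = [rng.standard_normal((width, width)) / np.sqrt(width) for _ in range(depth)]

for name, ws in [("orthogonal", orth_w), ("gaussian", gauss_w)]:
    s = jacobian_singvals(ws, x)
    print(f"{name}: sigma_max={s.max():.3e}, sigma_min={s.min():.3e}, "
          f"spread={s.max() / s.min():.3e}")
```

The printed spread (ratio of largest to smallest singular value) is the quantity of interest: a spectrum concentrated near a single value indicates an approximately norm-preserving map, which is the property the orthogonal initialization is designed to encourage.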