In recent years, state-of-the-art methods in computer vision have utilized increasingly deep convolutional neural network (CNN) architectures, with some of the most successful models employing hundreds or even thousands of layers. A variety of pathologies, such as vanishing and exploding gradients, makes training such deep networks challenging. While residual connections and batch normalization do enable training at these depths, it has remained unclear whether such specialized architecture designs are truly necessary to train deep CNNs. In this work, we demonstrate that it is possible to train vanilla CNNs with ten thousand layers or more simply by using an appropriate initialization scheme. We derive this initialization scheme theoretically by developing a mean field theory for signal propagation and by characterizing the conditions for dynamical isometry, the equilibration of the singular values of the input-output Jacobian matrix. These conditions require that the convolution operator be an orthogonal transformation in the sense that it is norm-preserving. We present an algorithm for generating such random initial orthogonal convolution kernels and demonstrate empirically that they enable efficient training of extremely deep architectures.
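The abstract does not spell out the orthogonal-kernel construction itself. As a rough illustration only, the sketch below builds one simple norm-preserving convolution initializer of the "delta-orthogonal" flavor: the kernel is zero everywhere except at its spatial center, where a random orthogonal matrix maps input channels to output channels. The function name `delta_orthogonal_kernel`, the tensor layout, and the QR-based sampling are assumptions made for this example and are not claimed to be the paper's exact algorithm.

import numpy as np

def delta_orthogonal_kernel(kernel_size, c_in, c_out, gain=1.0, rng=None):
    """Illustrative sketch (not the paper's exact algorithm) of a
    norm-preserving convolution initializer.

    The kernel is zero at every spatial position except the center,
    where a random (semi-)orthogonal matrix maps c_in to c_out channels.
    At initialization the convolution therefore acts like an orthogonal
    1x1 transformation, which preserves the norm of the input signal.
    """
    rng = np.random.default_rng() if rng is None else rng
    n = max(c_in, c_out)
    # Sample a Gaussian matrix and orthogonalize it via QR decomposition.
    a = rng.standard_normal((n, n))
    q, r = np.linalg.qr(a)
    # Fix the sign ambiguity of QR so that q is uniformly distributed.
    q *= np.sign(np.diag(r))
    q = q[:c_out, :c_in]  # semi-orthogonal block of the needed shape

    # Kernel layout assumed here: (height, width, c_in, c_out).
    kernel = np.zeros((kernel_size, kernel_size, c_in, c_out))
    center = kernel_size // 2
    kernel[center, center, :, :] = gain * q.T
    return kernel

# Example usage: a 3x3 kernel mapping 64 input channels to 64 output channels.
w = delta_orthogonal_kernel(3, 64, 64)

Placing the orthogonal matrix only at the spatial center means the convolution initially behaves as a channel-mixing rotation with no spatial mixing, which is one simple way to satisfy the norm-preservation condition the abstract describes.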