The study of feature propagation at initialization in neural networks lies at the root of numerous initialization designs. A very common assumption in the field is that the pre-activations are Gaussian. Although this convenient Gaussian hypothesis can be justified when the number of neurons per layer tends to infinity, it is challenged by both theoretical and experimental works for finite-width neural networks. Our major contribution is to construct a family of pairs of activation functions and initialization distributions that ensure that the pre-activations remain Gaussian throughout the network's depth, even in narrow neural networks. In the process, we discover a set of constraints that a neural network should fulfill to ensure Gaussian pre-activations. Additionally, we provide a critical review of the claims of the Edge of Chaos line of work and build an exact Edge of Chaos analysis. We also propose a unified view of pre-activation propagation, encompassing the frameworks of several well-known initialization procedures. Finally, our work provides a principled framework for answering the much-debated question: is it desirable to initialize the training of a neural network whose pre-activations are ensured to be Gaussian?