Activation functions are essential for introducing non-linearity into neural networks. A great number of empirical studies have validated various activation functions, yet theoretical research on activation functions remains insufficient. In this work, we study the impact of activation functions on the variance of gradients and propose an approach to normalize activation functions so that the variance of the gradients stays the same across all layers, allowing the neural network to achieve better convergence. First, we complement previous work on the analysis of the variance of gradients, in which the impact of activation functions is considered only in an idealized initial state that can hardly be preserved during training, and we derive a property that good activation functions should satisfy as far as possible. Second, we propose an approach to normalize activation functions and empirically verify its effectiveness on prevalent activation functions. By observing the experiments, we find that the speed of convergence is roughly related to the property derived in the first part. We run experiments comparing our normalized activation functions against common activation functions, and the results show that our approach consistently outperforms the unnormalized counterparts. For example, normalized Swish outperforms vanilla Swish by 1.2% in top-1 accuracy on ResNet50 with CIFAR-100. Our method improves performance by simply replacing activation functions with their normalized counterparts, in both fully-connected networks and residual networks.
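To illustrate the general idea (not the exact procedure of this work), one simple way to normalize an activation function is to rescale it so that the second moment of its derivative equals one under standard-normal pre-activations, which keeps the variance of back-propagated gradients roughly constant per layer. The sketch below, including the helper `normalize_activation` and its Monte Carlo estimate of the scale constant, is a hypothetical example under that assumption.

```python
import numpy as np

def normalize_activation(f, df, n_samples=1_000_000, seed=0):
    """Return a scaled activation c*f and the constant c such that
    E[(c*f)'(x)^2] = 1 for x ~ N(0, 1), estimated by Monte Carlo.

    f  : the activation function (vectorized)
    df : its derivative (vectorized)
    """
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n_samples)
    # Scale so the expected squared derivative is one; this is one way to
    # keep the gradient variance from shrinking or growing across layers.
    c = 1.0 / np.sqrt(np.mean(df(x) ** 2))
    return (lambda z: c * f(z)), c

# Example: a normalized Swish, f(x) = x * sigmoid(x).
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
swish = lambda z: z * sigmoid(z)
swish_grad = lambda z: sigmoid(z) + z * sigmoid(z) * (1.0 - sigmoid(z))

norm_swish, scale = normalize_activation(swish, swish_grad)
print(f"scale constant c ~= {scale:.4f}")
```

In practice, such a normalized activation would simply replace the original one in each layer; the paper's own normalization may differ in how the constant is derived.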