Weight initialization plays an important role in training neural networks and also affects a wide range of deep learning applications. Various weight initialization strategies have been developed for different activation functions and network architectures. These initialization algorithms are based on controlling the variance of the parameters across layers, and they may still fail when neural networks are deep, e.g., due to the dying ReLU problem. To address this challenge, we study neural networks from a nonlinear computation point of view and propose a novel weight initialization strategy based on the linear product structure (LPS) of neural networks. The proposed strategy is derived from a polynomial approximation of the activation functions and uses theories from numerical algebraic geometry to guarantee finding all local minima. We also provide a theoretical analysis showing that the LPS initialization has a lower probability of producing dying ReLU units compared to other existing initialization strategies. Finally, we test the LPS initialization algorithm on both fully connected and convolutional neural networks to demonstrate its feasibility, efficiency, and robustness on public datasets.
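The dying ReLU failure mode mentioned above can be reproduced with a short numerical experiment. The sketch below is an illustration only, not the LPS method described in this work; the network width, depth, input distribution, and use of He (Kaiming) initialization with zero biases are all assumptions. It estimates how often a narrow, randomly initialized ReLU network is already "dead" at initialization, i.e., outputs zero for every sampled input, and this estimated probability grows with depth.

```python
import numpy as np

def born_dead_prob(depth, width=4, in_dim=4, n_nets=200, n_inputs=256, seed=0):
    """Estimate the probability that a randomly initialized deep ReLU MLP
    is 'born dead': its output is identically zero on a sample of inputs.
    Hypothetical setup: He (Kaiming) normal weights, zero biases, narrow layers."""
    rng = np.random.default_rng(seed)
    dead = 0
    for _ in range(n_nets):
        x = rng.standard_normal((n_inputs, in_dim))
        h, fan_in = x, in_dim
        for _ in range(depth):
            W = rng.standard_normal((fan_in, width)) * np.sqrt(2.0 / fan_in)
            h = np.maximum(h @ W, 0.0)   # ReLU with zero bias
            fan_in = width
        if np.all(h == 0.0):             # every output unit is zero for every input
            dead += 1
    return dead / n_nets

for depth in (2, 5, 10, 20, 40):
    print(f"depth {depth:3d}: estimated born-dead probability {born_dead_prob(depth):.2f}")
```

Running this sketch shows the estimated probability of a fully dead network increasing with depth for narrow layers, which is the failure mode that variance-based initializations do not rule out and that the LPS initialization is designed to make less likely.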