We prove that two-layer (Leaky)ReLU networks initialized, e.g., by the widely used method proposed by He et al. (2015) and trained using gradient descent on a least-squares loss are not universally consistent. Specifically, we describe a large class of one-dimensional data-generating distributions for which, with high probability, gradient descent only finds a bad local minimum of the optimization landscape, since it is unable to move the biases far away from their initialization at zero. It turns out that in these cases, the found network essentially performs linear regression even if the target function is non-linear. We further provide numerical evidence that this happens in practical situations and for some multi-dimensional distributions, and that stochastic gradient descent exhibits similar behavior. We also provide empirical results on how the choice of initialization and optimizer can influence this behavior.
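To make the described setting concrete, the following is a minimal illustrative sketch (not the paper's exact experimental setup): a two-layer ReLU network on one-dimensional data with He-style initialization (zero biases), trained by full-batch gradient descent on a least-squares loss. The target function sin(3x), the network width, the learning rate, and the step count are all arbitrary choices for illustration; the quantities printed at the end are the kind one would inspect to see whether the biases stay near zero and the fit remains essentially affine away from the origin.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-layer ReLU network on 1D inputs, He-style init: weights ~ N(0, 2/fan_in),
# biases initialized at zero, trained by full-batch gradient descent on the
# least-squares loss. All hyperparameters below are illustrative assumptions.
n, width, lr, steps = 256, 128, 1e-2, 20_000
x = rng.uniform(-1.0, 1.0, size=(n, 1))
y = np.sin(3.0 * x)                       # a non-linear target (assumption)

W1 = rng.normal(0.0, np.sqrt(2.0 / 1), size=(1, width))
b1 = np.zeros(width)                      # biases start at zero
W2 = rng.normal(0.0, np.sqrt(2.0 / width), size=(width, 1))
b2 = np.zeros(1)

for _ in range(steps):
    pre = x @ W1 + b1                     # pre-activations, shape (n, width)
    act = np.maximum(pre, 0.0)            # ReLU
    pred = act @ W2 + b2
    grad_pred = 2.0 * (pred - y) / n      # gradient of mean squared error
    gW2 = act.T @ grad_pred
    gb2 = grad_pred.sum(axis=0)
    grad_pre = (grad_pred @ W2.T) * (pre > 0)
    gW1 = x.T @ grad_pre
    gb1 = grad_pre.sum(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

# If the hidden biases barely move, each unit's kink stays at x = 0, so the
# learned function is affine on each side of the origin, i.e. close to a
# linear regression fit even though the target is non-linear.
print("max |hidden bias| after training:", np.abs(b1).max())
print("predictions at first few inputs:", np.round(pred[:5, 0], 3))
```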