Understanding the implicit bias of gradient descent has been an important goal in machine learning research. Unfortunately, even for a single-neuron ReLU network trained with the square loss, it was recently shown that the implicit regularization cannot be characterized by an explicit function of the norm of the model parameters. To close the gap between existing theory and the intriguing empirical behavior of ReLU networks, we examine the gradient flow dynamics in parameter space when training single-neuron ReLU networks. Specifically, we identify an implicit bias in terms of support vectors in ReLU networks, which play a key role in why and how ReLU networks generalize well. Moreover, we analyze gradient flows with respect to the magnitude of the initialization norm and show how this norm affects the gradient dynamics. Lastly, under certain conditions, we prove that the norm of the learned weight strictly increases along the gradient flow.
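A minimal sketch of the setting described above, not taken from the paper: a single-neuron ReLU network f(x) = max(0, <w, x>) trained with the square loss, where small-step gradient descent is used as a crude discretization of gradient flow. The data, initialization scale, step size, and helper `loss_grad` are all hypothetical choices made only to track how the weight norm evolves along the trajectory.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: inputs X (n x d) with realizable ReLU targets y.
n, d = 20, 5
X = rng.normal(size=(n, d))
w_star = rng.normal(size=d)
y = np.maximum(0.0, X @ w_star)

def loss_grad(w):
    """Gradient of L(w) = (1/2n) * sum_i (ReLU(<w, x_i>) - y_i)^2 (hypothetical helper)."""
    pre = X @ w                          # pre-activations <w, x_i>
    act = np.maximum(0.0, pre)           # ReLU outputs
    mask = (pre > 0).astype(float)       # subgradient of ReLU at each sample
    return (X * (mask * (act - y))[:, None]).mean(axis=0)

# Small initialization norm and small step size, standing in for gradient flow.
w = 0.01 * rng.normal(size=d)
eta, steps = 1e-2, 20000
norms = [np.linalg.norm(w)]
for _ in range(steps):
    w = w - eta * loss_grad(w)
    norms.append(np.linalg.norm(w))

# Inspect whether ||w_t|| is monotone along this (discretized) trajectory.
print(f"initial ||w||: {norms[0]:.4f}, final ||w||: {norms[-1]:.4f}")
print("norm non-decreasing:", all(b >= a - 1e-12 for a, b in zip(norms, norms[1:])))
```

Varying the initialization scale (the `0.01` factor above) gives a simple way to probe, empirically, how the norm of the initialization influences the resulting dynamics.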