We consider the optimisation of large, shallow neural networks via gradient flow, where the output of each hidden node is scaled by a positive parameter. We focus on the case where the node scalings are non-identical, in contrast to the classical Neural Tangent Kernel (NTK) parameterisation. We prove that, with high probability, gradient flow on large networks converges to a global minimum and, unlike in the NTK regime, can learn features. We also provide experiments on synthetic and real-world datasets illustrating these theoretical results and showing the benefit of such scalings for pruning and transfer learning.
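As a rough sketch of the setup (the notation below is our own illustration, not taken from the paper), a shallow network of width $m$ with per-node scalings $\lambda_j > 0$ can be written as
\[
f_\theta(x) \;=\; \sum_{j=1}^{m} \lambda_j \, a_j \, \sigma\!\big(\langle w_j, x \rangle\big),
\]
where $\sigma$ is the activation and $a_j \in \mathbb{R}$, $w_j \in \mathbb{R}^d$ are trainable parameters. Under this notation, the classical NTK parameterisation corresponds to the identical choice $\lambda_j = 1/\sqrt{m}$ for all nodes, whereas allowing the $\lambda_j$ to differ across nodes is the departure from the NTK regime considered here.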