Over-parametrization has become a popular technique in deep learning. It has been observed that, with over-parametrization, a larger neural network needs fewer training iterations than a smaller one to reach a given level of performance -- that is, over-parametrization leads to acceleration in optimization. However, although over-parametrization is widely used nowadays, little theory is available to explain the acceleration it brings. In this paper, we propose to understand it by first studying a simple problem. Specifically, we consider a setting with a single teacher neuron with quadratic activation, where over-parametrization is realized by having multiple student neurons learn the data generated by the teacher neuron. We provably show that over-parametrization helps the iterates generated by gradient descent enter the neighborhood of a globally optimal solution that achieves zero testing error faster. On the other hand, we also point out an issue regarding the necessity of over-parametrization and study how the scaling of the output neurons affects the convergence time.
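To make the setting concrete, the following is a minimal sketch of the teacher-student setup described above: a single teacher neuron with quadratic activation generates the labels, and an over-parametrized student with k quadratic neurons is trained by plain gradient descent on the squared loss. The Gaussian inputs, the 1/k output scaling, the initialization scale, and the step size are illustrative assumptions, not necessarily the choices analyzed in the paper.

```python
import numpy as np

# Sketch of the teacher-student setting (assumed details: Gaussian inputs,
# 1/k output scaling, fixed step size; not the paper's exact configuration).
rng = np.random.default_rng(0)
d, k, n = 10, 20, 1000          # input dimension, number of student neurons, samples
lr, steps = 0.01, 2000          # step size and number of gradient-descent steps

w_star = rng.standard_normal(d)           # single teacher neuron
X = rng.standard_normal((n, d))           # inputs x ~ N(0, I_d)
y = (X @ w_star) ** 2                     # labels from the quadratic teacher: (w_star^T x)^2

W = 0.1 * rng.standard_normal((k, d))     # over-parametrized student: k neurons

for t in range(steps):
    Z = X @ W.T                                   # z_{ij} = w_j^T x_i
    pred = (Z ** 2).mean(axis=1)                  # f(x) = (1/k) sum_j (w_j^T x)^2
    resid = pred - y                              # residuals on the training set
    # gradient of 0.5 * mean(resid^2) with respect to the student weights W
    grad = (2.0 / (n * k)) * ((resid[:, None] * Z).T @ X)
    W -= lr * grad

final_loss = 0.5 * np.mean((((X @ W.T) ** 2).mean(axis=1) - y) ** 2)
print("final training loss:", final_loss)
```

Increasing k in this sketch corresponds to the over-parametrization studied in the paper, and the 1/k factor is one example of the output scaling whose effect on convergence time the paper examines.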