This work explores the hypothesis that the complexity of the function a deep neural network (NN) is learning can be inferred from how fast its weights change during training. Our analysis provides evidence for this supposition by relating the network's distribution of Lipschitz constants (i.e., the norm of the gradient at different regions of the input space) during different training intervals to the behavior of the stochastic training procedure. We first observe that the average Lipschitz constant close to the training data affects various aspects of the parameter trajectory, with more complex networks having a longer trajectory, greater variance, and often veering farther from their initialization. We then show that NNs whose biases are trained more steadily have bounded complexity even in regions of the input space that are far from any training point. Finally, we find that steady training with Dropout implies a training- and data-dependent generalization bound that grows poly-logarithmically with the number of parameters. Overall, our results support the hypothesis that good training behavior can be a useful bias towards good generalization.
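As a minimal sketch of the quantity the abstract refers to, the snippet below estimates the average local Lipschitz constant of a small network near its training data by averaging input-gradient norms. The network, data, and estimator here are illustrative assumptions, not the paper's exact construction.

```python
# Hedged sketch: estimate the average local Lipschitz constant of a tiny MLP
# near "training" points via input-gradient norms. All names and sizes here
# are illustrative assumptions, not the paper's actual setup.
import numpy as np

rng = np.random.default_rng(0)

# One-hidden-layer tanh network with fixed random weights (2 -> 16 -> 1).
W1 = rng.normal(size=(2, 16))
b1 = rng.normal(size=16)
W2 = rng.normal(size=(16, 1))

def input_grad_norm(x):
    # For f(x) = tanh(x W1 + b1) W2, the input gradient is
    # W1 @ diag(1 - h^2) @ W2, where h is the hidden activation.
    h = np.tanh(x @ W1 + b1)
    grad = W1 @ ((1 - h**2) * W2.ravel())
    return np.linalg.norm(grad)

# Stand-in "training points": the local complexity proxy is the mean
# gradient norm over this region of input space.
X_train = rng.normal(size=(100, 2))
avg_lipschitz = np.mean([input_grad_norm(x) for x in X_train])
print(avg_lipschitz)  # a positive scalar summarizing local complexity
```

Tracking how this scalar evolves over training intervals is one concrete way to relate a network's local complexity to the dynamics of the stochastic training procedure.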