Larger and deeper networks generalise well despite their increased capacity to overfit. Understanding why this happens is theoretically and practically important. One approach has been to study the infinitely wide limits of such networks. However, these limits cannot fully explain finite networks: they do not learn features, whereas the empirical kernel of a finite network changes significantly during training. In this work, we derive an iterative linearised training method to investigate this distinction, allowing us to control for sparse (i.e. infrequent) feature updates and to quantify how frequently feature learning must occur to achieve comparable performance. We justify iterative linearisation as an interpolation between a finite analogue of the infinite-width regime, which does not learn features, and standard gradient descent training, which does. We also show that it is analogous to a damped version of the Gauss-Newton algorithm, a second-order optimisation method. We show that in a variety of cases iterative linearised training performs on par with standard training, noting in particular how infrequent feature learning can be while still achieving comparable performance. We also show that feature learning is essential for good performance. Since such feature learning inevitably changes the Neural Tangent Kernel (NTK), this provides direct negative evidence for the NTK theory, which states that the kernel remains constant during training.
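To make the idea concrete, the following is a minimal sketch of iterative linearised training in JAX, not the paper's implementation: the names `apply_fn`, `linearise`, the re-linearisation period `K`, and the data format are assumed placeholders. The linearised model is trained with gradient descent while the linearisation point (the "features") is refreshed only every `K` steps; `K = 1` recovers standard training, and never refreshing corresponds to the fixed-kernel (lazy/NTK-style) regime.

```python
# Illustrative sketch only; apply_fn, data, lr, K and steps are hypothetical.
import jax
import jax.numpy as jnp


def linearise(apply_fn, anchor_params):
    """First-order Taylor expansion of apply_fn around anchor_params."""
    def lin_apply(params, x):
        # f_lin(theta) = f(theta_0) + J(theta_0) (theta - theta_0)
        delta = jax.tree_util.tree_map(lambda p, a: p - a, params, anchor_params)
        f0, jvp_out = jax.jvp(lambda p: apply_fn(p, x), (anchor_params,), (delta,))
        return f0 + jvp_out
    return lin_apply


def loss_fn(apply, params, x, y):
    # Squared-error loss on the (possibly linearised) model's predictions.
    return jnp.mean((apply(params, x) - y) ** 2)


def iterative_linearised_training(apply_fn, params, data, lr=1e-2, K=100, steps=1000):
    # K controls how often the linearisation point is refreshed:
    # K = 1 is standard gradient descent; large K approaches fully
    # linearised (fixed-kernel) training.
    anchor = params
    lin_apply = linearise(apply_fn, anchor)
    for t in range(steps):
        x, y = data[t % len(data)]
        grads = jax.grad(lambda p: loss_fn(lin_apply, p, x, y))(params)
        params = jax.tree_util.tree_map(lambda p, g: p - lr * g, params, grads)
        if (t + 1) % K == 0:
            # Sparse "feature update": re-linearise around the current parameters.
            anchor = params
            lin_apply = linearise(apply_fn, anchor)
    return params
```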