This paper discusses the estimation of the generalization gap, the difference between generalization performance and training performance, for overparameterized models including neural networks. We first show that a functional variance, a key concept in defining a widely-applicable information criterion, characterizes the generalization gap even in overparameterized settings where conventional theory cannot be applied. Since computing the functional variance is expensive for overparameterized models, we propose an efficient approximation of the functional variance, the Langevin approximation of the functional variance (Langevin FV). This method leverages only the first-order gradient of the squared loss function, without referencing the second-order gradient; this ensures that the computation is efficient and that the implementation is consistent with gradient-based optimization algorithms. We demonstrate Langevin FV numerically by estimating the generalization gaps of overparameterized linear regression and non-linear neural network models, each containing more than a thousand parameters.
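The Langevin FV idea described above can be sketched as follows: draw parameter samples by running first-order Langevin dynamics on the squared loss, then estimate the functional variance as the sum over observations of the posterior variance of each per-observation loss. This is a minimal illustrative sketch, not the paper's exact algorithm; the toy data, step size `eta`, inverse temperature `beta`, and sample counts are all assumptions chosen for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy overparameterized linear regression with n < p (sizes are hypothetical)
n, p = 20, 50
X = rng.normal(size=(n, p))
theta_true = rng.normal(size=p) / np.sqrt(p)
y = X @ theta_true + 0.1 * rng.normal(size=n)

def sq_loss_grad(theta):
    # First-order gradient of the squared loss sum_i (y_i - x_i^T theta)^2;
    # no second-order (Hessian) information is used anywhere.
    return 2.0 * X.T @ (X @ theta - y)

def per_obs_loss(theta):
    # Per-observation squared losses, whose posterior variances we track
    return (X @ theta - y) ** 2

# Langevin dynamics: theta <- theta - eta * grad + sqrt(2*eta/beta) * noise
eta, beta = 1e-3, float(n)          # step size and inverse temperature (assumed)
burn_in, n_samples = 2000, 1000
theta = np.linalg.pinv(X) @ y        # start from a minimum-norm interpolator
losses = []
for t in range(burn_in + n_samples):
    noise = rng.normal(size=p)
    theta = theta - eta * sq_loss_grad(theta) + np.sqrt(2.0 * eta / beta) * noise
    if t >= burn_in:
        losses.append(per_obs_loss(theta))

# Functional-variance estimate: sum over observations of the variance
# of each per-observation loss across the Langevin samples
losses = np.asarray(losses)          # shape (n_samples, n)
fv_estimate = losses.var(axis=0).sum()
print(fv_estimate)
```

Because the update uses only `sq_loss_grad`, the same loop structure drops directly into any gradient-based training pipeline, which is the implementation convenience the abstract highlights.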
Estimating the Generalization Gap of Overparameterized Models via the Langevin Functional Variance