Despite the ubiquitous use of stochastic optimization algorithms in machine learning, the precise impact of these algorithms on generalization performance in realistic non-convex settings is still poorly understood. In this paper, we provide an encompassing theoretical framework for investigating the generalization properties of stochastic optimizers, which is based on their dynamics. We first prove a generalization bound attributable to the optimizer dynamics in terms of the celebrated Fernique-Talagrand functional applied to the trajectory of the optimizer. This data- and algorithm-dependent bound is shown to be the sharpest possible in the absence of further assumptions. We then specialize this result by exploiting the Markovian structure of stochastic optimizers, deriving generalization bounds in terms of the (data-dependent) transition kernels associated with the optimization algorithms. In line with recent work that has revealed connections between generalization and heavy-tailed behavior in stochastic optimization, we link the generalization error to the local tail behavior of the transition kernels. We illustrate that the local power-law exponent of the kernel acts as an effective dimension, which decreases as the transitions become "less Gaussian". We support our theory with empirical results from a variety of neural networks, and we show that both the Fernique-Talagrand functional and the local power-law exponent are predictive of generalization performance.
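As background for the bound mentioned above (a standard definition recalled here, not part of the abstract itself), the Fernique-Talagrand functional of a set $T$ equipped with a metric $d$ is the generic-chaining quantity
\[
\gamma_2(T,d) \;=\; \inf_{(\mathcal{A}_n)_{n\ge 0}} \; \sup_{t\in T} \; \sum_{n\ge 0} 2^{n/2}\,\operatorname{diam}\bigl(\mathcal{A}_n(t)\bigr),
\]
where the infimum is taken over admissible sequences of partitions of $T$ (i.e., $|\mathcal{A}_0|=1$ and $|\mathcal{A}_n|\le 2^{2^n}$) and $\mathcal{A}_n(t)$ denotes the cell of $\mathcal{A}_n$ containing $t$. In the bound described above, $T$ would be the trajectory of the optimizer; the precise choice of metric is specified in the paper rather than in the abstract.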