We establish a disintegrated PAC-Bayesian bound for classifiers trained via continuous-time (non-stochastic) gradient descent. In contrast to the standard PAC-Bayesian setting, our result applies to a training algorithm that is deterministic conditionally on a random initialisation, without requiring any $\textit{de-randomisation}$ step. We discuss the main features of the proposed bound in detail, and we study its behaviour analytically and empirically on linear models, finding promising results.
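For concreteness, continuous-time (non-stochastic) gradient descent refers to gradient-flow training dynamics. A minimal formalisation follows, with illustrative notation not taken from the paper: $w(t)$ denotes the parameters, $\hat{L}_S$ the empirical loss on the sample $S$, and $\mathcal{P}_0$ the initialisation distribution:
\[
\frac{\mathrm{d}w(t)}{\mathrm{d}t} = -\nabla_w \hat{L}_S\bigl(w(t)\bigr), \qquad w(0) \sim \mathcal{P}_0,
\]
so that, once the initialisation $w(0)$ is drawn, the trained classifier $w(t)$ is a deterministic function of $w(0)$ and $S$; the randomness required by the PAC-Bayesian framework enters only through $\mathcal{P}_0$.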