This work discusses how to derive upper bounds for the expected generalisation error of supervised learning algorithms by means of the chaining technique. By developing a general theoretical framework, we establish a duality between generalisation bounds based on the regularity of the loss function and their chained counterparts, which can be obtained by lifting the regularity assumption from the loss onto its gradient. This allows us to re-derive the chaining mutual information bound from the literature, and to obtain novel chained information-theoretic generalisation bounds, based on the Wasserstein distance and other probability metrics. We show on some toy examples that the chained generalisation bound can be significantly tighter than its standard counterpart, particularly when the distribution of the hypotheses selected by the algorithm is very concentrated.

Keywords: Generalisation bounds; Chaining; Information-theoretic bounds; Mutual information; Wasserstein distance; PAC-Bayes.
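For orientation, the sketch below is a minimal LaTeX statement of the standard (unchained) mutual-information bound of Xu and Raginsky (2017), which the chained results refine. The notation (sample size n, subgaussianity constant \sigma, hypothesis W, sample S) is standard and introduced here for illustration, not taken from this abstract.

% Minimal sketch: the classical mutual-information generalisation bound.
% Assumes n i.i.d. training samples S, a learned hypothesis W, and a loss
% \ell(w, Z) that is \sigma-subgaussian under the data distribution for
% every fixed hypothesis w.
\[
  \bigl|\mathbb{E}\,\mathrm{gen}(W, S)\bigr|
  \;\le\;
  \sqrt{\frac{2\sigma^{2}}{n}\, I(W; S)},
\]
% where gen(W, S) denotes the gap between population and empirical risk,
% and I(W; S) is the mutual information between hypothesis and sample.

Roughly speaking, the chained counterparts replace the single term I(W; S) with a weighted sum of mutual informations between S and successively coarser discretisations of W, which is what allows them to be tighter when the distribution of the selected hypotheses is concentrated.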