A recent line of work, initiated by \cite{russo2016controlling} and \cite{xu2017information}, shows that the generalization error of a learning algorithm can be upper bounded by information measures. In most of the relevant works, the expected generalization error converges at a rate of $O(\sqrt{\lambda/n})$, where $\lambda$ is some information-theoretic quantity such as the mutual information between the data sample and the learned hypothesis. However, such a rate is typically considered ``slow'' compared to the ``fast rate'' of $O(1/n)$ achievable in many learning scenarios. In this work, we first show that the square root does not necessarily imply a slow rate, and that a fast-rate $O(1/n)$ result can still be obtained from this bound under appropriate assumptions. Furthermore, we identify the key condition needed for a fast-rate generalization error, which we call the $(\eta,c)$-central condition. Under this condition, we give information-theoretic bounds on the generalization error and excess risk, with a convergence rate of $O(\lambda/n)$ for specific learning algorithms such as empirical risk minimization. Finally, analytical examples are given to demonstrate the effectiveness of the bounds.
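For concreteness, the following display is a minimal sketch of the two bound shapes discussed above; it uses standard notation not fixed in the abstract ($S = (Z_1,\dots,Z_n)$ for the sample, $W$ for the learned hypothesis, $L_\mu$ and $L_S$ for population and empirical risk, and a comparator $w^*$). The first line is the square-root bound of \cite{xu2017information} for a $\sigma$-sub-Gaussian loss, with $\lambda = I(S;W)$; the second line is the standard $\eta$-form of the central condition, of which the paper's $(\eta,c)$-central condition is a parametrized variant.
% Illustrative sketch only, not the paper's exact statements.
% Square-root (slow-rate) bound for a sigma-sub-Gaussian loss \ell(w,Z):
\[
  \bigl| \mathbb{E}\bigl[ L_\mu(W) - L_S(W) \bigr] \bigr|
  \;\le\; \sqrt{\frac{2\sigma^2}{n}\, I(S;W)} .
\]
% Standard eta-central condition: there exists a comparator w^* such that,
% for all hypotheses w,
\[
  \mathbb{E}_{Z}\!\left[ e^{-\eta \left( \ell(w,Z) - \ell(w^*,Z) \right)} \right] \;\le\; 1 .
\]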