In this paper, we leverage stochastic projection and lossy compression to establish new conditional mutual information (CMI) bounds on the generalization error of statistical learning algorithms. We show that these bounds are generally tighter than existing ones. In particular, we prove that for certain problem instances for which existing MI and CMI bounds were recently shown in Attias et al. [2024] and Livni [2023] to become vacuous or to fail to describe the right generalization behavior, our bounds yield suitable generalization guarantees of order $\mathcal{O}(1/\sqrt{n})$, where $n$ is the size of the training dataset. Furthermore, we use our bounds to investigate the problem of data "memorization" raised in those works, which asserts that there exist learning problem instances such that, for any learning algorithm with good prediction performance, there are data distributions under which the algorithm must "memorize" a large fraction of the training dataset. We show that for every learning algorithm there exists an auxiliary algorithm that does not memorize and that achieves comparable generalization error for any data distribution. In part, this shows that memorization is not necessary for good generalization.
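For context, a standard CMI generalization bound in the style of Steinke and Zakynthinou, which exhibits the $\mathcal{O}(1/\sqrt{n})$ scaling discussed above, can be sketched as follows (stated for a loss bounded in $[0,1]$ and only to fix notation; it is not the new bound of this paper):
\[
% Sketch of the classical CMI bound: $\tilde{Z}\in\mathcal{Z}^{n\times 2}$ is a supersample,
% $U\in\{0,1\}^n$ selects the training sample, $W$ is the output of the algorithm $\mathcal{A}$,
% and $\mathrm{CMI}_{\mathcal{D}}(\mathcal{A}) = I(W; U \mid \tilde{Z})$.
  \bigl|\mathbb{E}[\mathrm{gen}(W,Z)]\bigr|
  \;\le\;
  \sqrt{\frac{2\,\mathrm{CMI}_{\mathcal{D}}(\mathcal{A})}{n}}.
\]
The bounds developed in this paper sharpen quantities of this type via stochastic projection and lossy compression of the hypothesis $W$.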