We study the mutual information between (certain summaries of) the output of a learning algorithm and its $n$ training data, conditional on a supersample of $n+1$ i.i.d. data from which the training data is chosen at random without replacement. These leave-one-out variants of the conditional mutual information (CMI) of an algorithm (Steinke and Zakynthinou, 2020) are also seen to control the mean generalization error of learning algorithms with bounded loss functions. For learning algorithms achieving zero empirical risk under 0-1 loss (i.e., interpolating algorithms), we provide an explicit connection between leave-one-out CMI and the classical leave-one-out error estimate of the risk. Using this connection, we obtain upper and lower bounds on risk in terms of the (evaluated) leave-one-out CMI. When the limiting risk is constant or decays polynomially, the bounds converge to within a constant factor of two. As an application, we analyze the population risk of the one-inclusion graph algorithm, a general-purpose transductive learning algorithm for VC classes in the realizable setting. Using leave-one-out CMI, we match the optimal bound for learning VC classes in the realizable setting, answering an open challenge raised by Steinke and Zakynthinou (2020). Finally, in order to understand the role of leave-one-out CMI in studying generalization, we place leave-one-out CMI in a hierarchy of measures, with a novel unconditional mutual information at the root. For 0-1 loss and interpolating learning algorithms, this mutual information is observed to be precisely the risk.
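To fix ideas, the following is a schematic rendering of the central object; the notation ($\mathcal{D}$, $A$, $\tilde{Z}$, $U$) is ours and may differ from the paper's. Given a supersample $\tilde{Z} = (Z_1, \dots, Z_{n+1}) \sim \mathcal{D}^{n+1}$ and an index $U$ uniform on $\{1, \dots, n+1\}$ independent of $\tilde{Z}$, take the training set to be $\tilde{Z}_{-U}$, the supersample with the $U$-th point removed (equivalently, $n$ points chosen at random without replacement from the $n+1$). The leave-one-out CMI of a learning algorithm $A$ is then
\[
  \mathrm{CMI}^{\mathrm{loo}}_{\mathcal{D}}(A) \;=\; I\bigl( A(\tilde{Z}_{-U}) ;\, U \,\bigm|\, \tilde{Z} \bigr),
\]
that is, how much the algorithm's output reveals, given the supersample, about which point was left out of training. The "(certain summaries of)" qualifier above corresponds to evaluated variants, which replace $A(\tilde{Z}_{-U})$ with a summary of the output, e.g., the losses it incurs on the supersample.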