We study the connection between multicalibration and boosting for squared error regression. First we prove a useful characterization of multicalibration in terms of a ``swap regret''-like condition on squared error. Using this characterization, we give an exceedingly simple algorithm that can be analyzed both as a boosting algorithm for regression and as a multicalibration algorithm for a class H, and that makes use only of a standard squared error regression oracle for H. We give a weak learning assumption on H that ensures convergence to Bayes optimality without the need for any realizability assumptions, giving us an agnostic boosting algorithm for regression. We then show that our weak learning assumption on H is both necessary and sufficient for multicalibration with respect to H to imply Bayes optimality. We also show that if H satisfies our weak learning condition relative to another class C, then multicalibration with respect to H implies multicalibration with respect to C. Finally, we investigate the empirical performance of our algorithm using an open source implementation that we make available. Our code repository can be found at https://github.com/Declancharrison/Level-Set-Boosting.
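To make the algorithmic idea concrete, the following is a minimal sketch, not the paper's exact procedure (see the repository above for the real implementation). It assumes labels in [0, 1] and a scikit-learn-style squared error regression oracle for H; the names `level_set_boost`, `oracle_factory`, `n_rounds`, and `n_levels` are illustrative, not the paper's API. Each round discretizes the current predictor into level sets and calls the oracle on each level set to correct the predictions there.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def level_set_boost(X, y, oracle_factory, n_rounds=20, n_levels=10):
    """Sketch: boost by repeatedly discretizing the current predictor
    into level sets and calling a squared-error regression oracle for H
    on each level set to correct the predictions there."""
    f = np.full(len(y), y.mean())  # start from the constant predictor
    models = []                    # per-round dict of fitted oracles by level
    for _ in range(n_rounds):
        # bucket examples by the level set of the current predictor
        levels = np.clip((f * n_levels).astype(int), 0, n_levels - 1)
        round_models = {}
        for v in np.unique(levels):
            mask = levels == v
            h = oracle_factory()        # fresh oracle call into H
            h.fit(X[mask], y[mask])     # least-squares fit on this level set
            f[mask] = np.clip(h.predict(X[mask]), 0.0, 1.0)
            round_models[v] = h
        models.append(round_models)
    return f, models

# Toy usage with depth-2 regression trees as the oracle for H:
rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 3))
y = np.clip(X[:, 0] * X[:, 1] + 0.1 * rng.normal(size=500), 0.0, 1.0)
f, _ = level_set_boost(X, y, lambda: DecisionTreeRegressor(max_depth=2))
print("training MSE:", np.mean((f - y) ** 2))
```

The loop stops improving exactly when no oracle call reduces squared error on any level set of f, which mirrors the swap-regret-style characterization of multicalibration described above.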