We prove calibration guarantees for the popular histogram binning (also called uniform-mass binning) method of Zadrozny and Elkan [2001]. Histogram binning has displayed strong practical performance, but theoretical guarantees have only been shown for sample-split versions that avoid 'double dipping' the data. We demonstrate that the statistical cost of sample splitting is practically significant on a credit default dataset. We then prove calibration guarantees for the original method that double dips the data, using a certain Markov property of order statistics. Based on our results, we make practical recommendations for choosing the number of bins in histogram binning. In our illustrative simulations, we propose a new tool for assessing calibration -- validity plots -- which provide more information than an ECE estimate. Code for this work will be made publicly available at https://github.com/aigen/df-posthoc-calibration.
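For readers unfamiliar with the method, the sketch below illustrates the basic uniform-mass binning recalibration step on held-out calibration data. It is a minimal illustration only: the function names, the choice of 10 bins, and the simulated data are ours and are not taken from the paper or its repository.

```python
# Minimal sketch of histogram (uniform-mass) binning, for illustration only;
# names and parameters here are assumptions, not the paper's implementation.
import numpy as np

def fit_uniform_mass_bins(scores, labels, n_bins):
    """Fit histogram binning on calibration data.

    Bin edges are score quantiles, so each bin holds roughly the same
    number of calibration points ("uniform mass"); the calibrated
    probability of a bin is the empirical label frequency inside it.
    """
    edges = np.quantile(scores, np.linspace(0, 1, n_bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf          # cover the full score range
    bin_ids = np.searchsorted(edges, scores, side="right") - 1
    bin_probs = np.array([labels[bin_ids == b].mean() for b in range(n_bins)])
    return edges, bin_probs

def predict_calibrated(scores, edges, bin_probs):
    """Map new scores to the calibrated probability of their bin."""
    bin_ids = np.searchsorted(edges, scores, side="right") - 1
    return bin_probs[np.clip(bin_ids, 0, len(bin_probs) - 1)]

# Example: recalibrate noisy scores against simulated binary labels.
rng = np.random.default_rng(0)
true_p = rng.uniform(size=2000)
labels = rng.binomial(1, true_p)
scores = np.clip(true_p + rng.normal(0, 0.1, size=2000), 0, 1)
edges, bin_probs = fit_uniform_mass_bins(scores, labels, n_bins=10)
print(predict_calibrated(np.array([0.1, 0.5, 0.9]), edges, bin_probs))
```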