Multicalibration is a notion of fairness that aims to provide accurate predictions across a large collection of groups. Multicalibration is known to be a goal distinct from loss minimization, even for simple predictors such as linear functions. In this note, we show that for (almost all) large neural network sizes, optimally minimizing squared error leads to multicalibration. Our results concern representational aspects of neural networks, not algorithmic or sample-complexity considerations. Previous results of this kind were known only for predictors that were nearly Bayes-optimal, and were therefore representation-independent. We emphasize that our results do not apply to specific algorithms for optimizing neural networks, such as SGD, and should not be interpreted as saying that "fairness comes for free from optimizing neural networks".
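For concreteness, here is one standard formalization of multicalibration (a sketch in our own notation; the predictor $f$, group class $\mathcal{C}$, and tolerance $\alpha$ are not defined in the abstract above). A predictor $f : \mathcal{X} \to [0,1]$ is $\alpha$-multicalibrated with respect to a class $\mathcal{C}$ of group-indicator functions $c : \mathcal{X} \to \{0,1\}$ if, for every $c \in \mathcal{C}$ and every value $v$ in the range of $f$,
\[
\Bigl| \mathbb{E}\bigl[\, c(x)\,\bigl(y - f(x)\bigr)\,\mathbf{1}\{f(x) = v\} \,\bigr] \Bigr| \le \alpha .
\]
In words, on each level set of the predictor, the predicted value must match the average outcome within every group in $\mathcal{C}$, a requirement that is in general strictly stronger than minimizing squared error alone.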