Modern machine learning models with high accuracy are often miscalibrated -- the predicted top probability does not reflect the actual accuracy and tends to be over-confident. It is commonly believed that such over-confidence is mainly due to over-parametrization, in particular when the model is large enough to memorize the training data and maximize the confidence. In this paper, we show theoretically that over-parametrization is not the only reason for over-confidence. We prove that logistic regression is inherently over-confident in the realizable, under-parametrized setting, where the data is generated from the logistic model and the sample size is much larger than the number of parameters. Further, this over-confidence occurs for general well-specified binary classification problems as long as the activation function is symmetric and concave on its positive part. Perhaps surprisingly, we also show that over-confidence is not always the case -- there exists another activation function (and a suitable loss function) under which the learned classifier is under-confident at some probability values. Overall, our theory provides a precise characterization of calibration in realizable binary classification, which we verify in simulations and real-data experiments.
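To make the claimed phenomenon concrete, the following is a minimal simulation sketch (assuming NumPy and scikit-learn are available; the dimension, sample sizes, signal strength, and confidence bins are illustrative choices, not the paper's experimental setup). It draws data from a logistic model with sample size much larger than the dimension, fits essentially unregularized logistic regression, and compares binned predicted top probabilities against empirical accuracy, which is how over-confidence would show up in a reliability diagram.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d, n_train, n_test = 100, 2_000, 200_000  # n_train >> d: under-parametrized regime (illustrative sizes)

beta = rng.normal(size=d)
beta /= np.linalg.norm(beta)  # unit-norm ground-truth parameter; signal strength is an assumption

def sample(n):
    # Realizable setting: labels drawn from the logistic model with parameter beta
    X = rng.normal(size=(n, d))
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    y = (rng.uniform(size=n) < p).astype(int)
    return X, y

X_tr, y_tr = sample(n_train)
X_te, y_te = sample(n_test)

# Large C makes the ridge penalty negligible, i.e. close to plain maximum likelihood
clf = LogisticRegression(C=1e6, max_iter=2000).fit(X_tr, y_tr)

conf = clf.predict_proba(X_te).max(axis=1)            # predicted top probability
correct = (clf.predict(X_te) == y_te).astype(float)   # 1 if the top-class prediction is right

# Reliability check: within each confidence bin, over-confidence shows up as
# empirical accuracy falling below the mean predicted confidence.
bins = np.linspace(0.5, 1.0, 11)
for lo, hi in zip(bins[:-1], bins[1:]):
    mask = (conf >= lo) & (conf < hi)
    if mask.any():
        print(f"confidence in [{lo:.2f}, {hi:.2f}): "
              f"mean confidence = {conf[mask].mean():.3f}, "
              f"accuracy = {correct[mask].mean():.3f}")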