Calibration of neural networks is a topical problem that is becoming increasingly important as neural networks are deployed in real-world applications. The problem is especially noticeable in modern neural networks, for which there is a significant difference between the model's confidence and the confidence it should have. Various strategies have been proposed with success, yet there is still room for improvement. We propose a novel approach that introduces a differentiable metric for the expected calibration error and successfully uses it as an objective for meta-learning, achieving results competitive with state-of-the-art approaches. Our approach presents a new direction of using meta-learning to directly optimize model calibration, which we believe will inspire further work along this promising line.
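To make the idea of a differentiable calibration objective concrete, the following is a minimal PyTorch sketch of one plausible formulation: a soft-binned surrogate for the expected calibration error, in which hard bin membership is replaced by temperature-controlled soft assignments so that gradients flow through the predicted confidences. The function name soft_binned_ece, the bin count, and the temperature are illustrative assumptions and not necessarily the formulation used in the paper.

```python
# A minimal sketch of a differentiable surrogate for expected calibration error (ECE).
# Assumed formulation: soft binning of confidences; not necessarily the paper's exact metric.
import torch
import torch.nn.functional as F

def soft_binned_ece(logits, labels, n_bins=15, temperature=100.0):
    probs = F.softmax(logits, dim=1)              # predicted class probabilities
    conf, pred = probs.max(dim=1)                 # confidence and predicted class
    acc = (pred == labels).float()                # per-sample correctness (treated as a constant)

    # Soft assignment of each confidence to bins via a softmax over negative
    # squared distances to the bin centres; this keeps gradients w.r.t. conf.
    centres = torch.linspace(0.5 / n_bins, 1 - 0.5 / n_bins, n_bins, device=logits.device)
    weights = F.softmax(-temperature * (conf.unsqueeze(1) - centres) ** 2, dim=1)  # (N, n_bins)

    bin_mass = weights.sum(dim=0) + 1e-8          # soft sample count per bin
    bin_conf = (weights * conf.unsqueeze(1)).sum(dim=0) / bin_mass
    bin_acc = (weights * acc.unsqueeze(1)).sum(dim=0) / bin_mass

    # Weighted average of |accuracy - confidence| over bins, mirroring standard ECE.
    return ((bin_mass / weights.sum()) * (bin_acc - bin_conf).abs()).sum()
```

In a meta-learning setup, a surrogate of this kind could be evaluated on a held-out (meta) split and back-propagated through to the quantities being meta-learned; the exact inner/outer optimization loop of the proposed method is not reproduced here.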