Modern neural networks do not always produce well-calibrated predictions, even when trained with a proper scoring function such as cross-entropy. In classification settings, simple methods such as isotonic regression or temperature scaling may be used together with a held-out dataset to calibrate model outputs. However, extending these methods to structured prediction is not always straightforward or effective; furthermore, a held-out calibration set may not always be available. In this paper, we study ensemble distillation as a general framework for producing well-calibrated structured prediction models while avoiding the prohibitive inference-time cost of ensembles. We validate this framework on two tasks: named-entity recognition and machine translation. We find that, across both tasks, ensemble distillation produces models which retain much of, and occasionally improve upon, the performance and calibration benefits of ensembles, while requiring only a single model at test time.
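To make the distillation setup concrete, the following is a minimal sketch, not the paper's implementation, of a per-position ensemble-distillation loss in PyTorch: the student is trained to match the averaged predictive distribution of the ensemble members. The function name, argument shapes, and the optional softening temperature are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def ensemble_distillation_loss(student_logits, ensemble_logits_list, temperature=1.0):
    """KL divergence from the student to the averaged ensemble distribution.

    student_logits: (batch, num_classes) logits from the single student model.
    ensemble_logits_list: list of (batch, num_classes) logits, one per ensemble member.
    temperature: hypothetical softening knob; 1.0 means no softening.
    """
    # Average the ensemble members' predictive distributions to form the target.
    with torch.no_grad():
        ensemble_probs = torch.stack(
            [F.softmax(logits / temperature, dim=-1) for logits in ensemble_logits_list]
        ).mean(dim=0)

    # Train the student to match that averaged distribution.
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    return F.kl_div(student_log_probs, ensemble_probs, reduction="batchmean")
```

In a structured prediction task such as named-entity recognition or machine translation, this loss would typically be applied at each output position (token) and averaged over the sequence, so that a single student model absorbs the ensemble's calibrated distribution without incurring its inference-time cost.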