Estimating the uncertainty of machine learning models is essential for assessing the quality of the predictions these models provide. However, several factors influence the quality of uncertainty estimates, one of which is the degree of model misspecification. Model misspecification always exists, as models are mere simplifications of or approximations to reality. This raises the question of whether uncertainty estimated under model misspecification is reliable. In this paper, we argue that model misspecification should receive more attention, by providing thought experiments and contextualizing them with relevant literature.