Uncertainty estimation bears the potential to make deep learning (DL) systems more reliable. Standard techniques for uncertainty estimation, however, come with specific combinations of strengths and weaknesses, e.g., with respect to estimation quality, generalization abilities, and computational complexity. To actually harness the potential of uncertainty quantification, estimators are required whose properties closely match the requirements of a given use case. In this work, we propose a framework that, firstly, structures and shapes these requirements, secondly, guides the selection of a suitable uncertainty estimation method, and, thirdly, provides strategies to validate this choice and to uncover structural weaknesses. By contributing tailored uncertainty estimation in this sense, our framework helps to foster trustworthy DL systems. Moreover, it anticipates prospective machine learning regulations that, e.g., in the EU, require evidence of the technical appropriateness of machine learning systems. Our framework provides such evidence for the system components that model uncertainty.