Reliable uncertainty quantification in deep neural networks is crucial for trustworthy and informed decision-making in safety-critical applications such as automated driving. Assessing the quality of uncertainty estimates is challenging because no ground truth for uncertainty is available. Ideally, in a well-calibrated model, uncertainty estimates correlate perfectly with model error. We propose a novel error-aligned uncertainty optimization method and introduce a trainable loss function that guides models to yield high-quality uncertainty estimates aligned with the model error. Our approach targets continuous structured prediction and regression tasks and is evaluated on multiple datasets, including a large-scale vehicle motion prediction task involving real-world distributional shifts. We demonstrate that our method improves the average displacement error by 1.69% and 4.69%, and the correlation between uncertainty and model error, as quantified by the Pearson correlation coefficient, by 17.22% and 19.13% on two state-of-the-art baselines.
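To make the core idea concrete, below is a minimal sketch of one way an error-aligned uncertainty loss could be implemented: a differentiable penalty on the (negative) Pearson correlation between per-sample uncertainty estimates and per-sample errors within a batch. This is an illustrative assumption, not the paper's exact formulation; the function name `error_aligned_uncertainty_loss`, the inputs `pred_error` and `pred_uncertainty`, and the weighting factor `lam` are hypothetical.

```python
import torch


def error_aligned_uncertainty_loss(pred_error: torch.Tensor,
                                    pred_uncertainty: torch.Tensor,
                                    eps: float = 1e-8) -> torch.Tensor:
    """Sketch of a correlation-alignment penalty (assumed formulation).

    pred_error:       (batch,) per-sample model errors, e.g. displacement errors.
    pred_uncertainty: (batch,) per-sample predicted uncertainties.
    Returns a scalar loss that is small when uncertainty and error are
    strongly positively correlated across the batch.
    """
    # Center both quantities within the batch.
    err = pred_error - pred_error.mean()
    unc = pred_uncertainty - pred_uncertainty.mean()
    # Differentiable Pearson correlation in [-1, 1].
    corr = (err * unc).sum() / (err.norm() * unc.norm() + eps)
    # Minimizing (1 - corr) pushes the correlation toward +1.
    return 1.0 - corr


# Hypothetical usage: add as an auxiliary term to the task loss.
# total_loss = task_loss + lam * error_aligned_uncertainty_loss(per_sample_error,
#                                                               per_sample_uncertainty)
```

In such a setup, the auxiliary term only shapes the ranking of uncertainties relative to errors, so the task loss (e.g., the motion-prediction objective) still drives accuracy while `lam` controls how strongly the alignment is enforced.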