Uncertainty quantification (UQ) is crucial for deploying machine learning models in high-stakes applications, where overconfident predictions can lead to serious consequences. An effective UQ method must balance computational efficiency with the ability to generalize across diverse scenarios. Evidential deep learning (EDL) achieves efficiency by modeling uncertainty through the prediction of a Dirichlet distribution over class probabilities. However, the restrictive assumption of Dirichlet-distributed class probabilities limits EDL's robustness, particularly in complex or unforeseen situations. To address this, we propose \textit{flexible evidential deep learning} ($\mathcal{F}$-EDL), which extends EDL by predicting a flexible Dirichlet distribution -- a generalization of the Dirichlet distribution -- over class probabilities. This approach provides a more expressive and adaptive representation of uncertainty, significantly enhancing UQ generalization and reliability under challenging scenarios. We theoretically establish several advantages of $\mathcal{F}$-EDL and empirically demonstrate its state-of-the-art UQ performance across diverse evaluation settings, including classical, long-tailed, and noisy in-distribution scenarios.