Over the last decade, research in artificial intelligence has had a significant impact on the advancement of autonomous driving. Yet safety remains a major concern when deploying such systems in high-risk environments. The objective of this thesis is to develop methodological tools that provide reliable uncertainty estimates for deep neural networks. First, we introduce a new criterion to reliably estimate model confidence: the true class probability (TCP). We show that TCP offers better properties for failure prediction than current uncertainty measures. Since the true class is by essence unknown at test time, we propose to learn the TCP criterion from data with an auxiliary model, introducing a specific learning scheme adapted to this context. The relevance of the proposed approach is validated on image classification and semantic segmentation datasets. Then, we extend our learned confidence approach to the task of domain adaptation, where it improves the selection of pseudo-labels in self-training methods. Finally, we tackle the challenge of jointly detecting misclassified and out-of-distribution samples by introducing a new uncertainty measure based on evidential models and defined on the simplex.
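For clarity, the TCP criterion can be summarized as follows (a minimal sketch, assuming a $K$-class classifier with predictive distribution $P(Y \mid \boldsymbol{x})$, true label $y^{*}$, and the standard maximum class probability (MCP) baseline as the point of comparison):
\[
\mathrm{TCP}(\boldsymbol{x}, y^{*}) = P(Y = y^{*} \mid \boldsymbol{x}),
\qquad
\mathrm{MCP}(\boldsymbol{x}) = \max_{k \in \{1,\dots,K\}} P(Y = k \mid \boldsymbol{x}).
\]
Because $y^{*}$ is unavailable at test time, the auxiliary model mentioned above is trained to regress $\mathrm{TCP}$ from the input, and its output then serves as the confidence estimate used for failure prediction.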