Uncertainty quantification methods are required in autonomous systems that include deep learning (DL) components, in order to assess the confidence of their estimations. However, for DL components to be successfully deployed in safety-critical autonomous systems, uncertainty must also be handled at the input of the DL components rather than only at their output. Considering a probability distribution over the input enables the propagation of uncertainty through the different components, providing a representative measure of the overall system uncertainty. In this position paper, we propose a method to account for uncertainty at the input of Bayesian Deep Learning control policies for aerial navigation. Our early experiments show that the proposed method improves the robustness of the navigation policy in Out-of-Distribution (OoD) scenarios.
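The abstract does not detail how input uncertainty is propagated; a minimal sketch of one common approach is joint Monte Carlo sampling, drawing inputs from an assumed Gaussian input distribution and propagating each sample through a stochastic (here, MC-dropout-style) policy network. All names, network sizes, and the dropout surrogate below are illustrative assumptions, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny "policy network": one hidden layer whose multiplicative
# dropout noise stands in for Bayesian weight uncertainty (illustrative only).
W1 = rng.normal(size=(4, 16))
W2 = rng.normal(size=(16, 2))

def policy(x, rng, p_drop=0.1):
    """One stochastic forward pass with an MC-dropout mask on the hidden layer."""
    h = np.maximum(x @ W1, 0.0)
    mask = rng.random(h.shape) > p_drop       # sample a dropout mask
    return (h * mask) @ W2 / (1.0 - p_drop)   # inverted-dropout rescaling

def propagate(x_mean, x_std, n_samples=200, rng=rng):
    """Propagate a Gaussian input distribution through the stochastic policy
    by jointly sampling input noise and network (dropout) noise."""
    outs = []
    for _ in range(n_samples):
        x = rng.normal(x_mean, x_std)         # sample the input distribution
        outs.append(policy(x, rng))
    outs = np.stack(outs)
    # Predictive mean and total variance over both sources of uncertainty.
    return outs.mean(axis=0), outs.var(axis=0)

mean, var = propagate(np.zeros(4), 0.1 * np.ones(4))
```

The returned variance mixes input (aleatoric) and model (epistemic) uncertainty; separating the two would require averaging over dropout masks per input sample and decomposing the variance accordingly.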