Although neural networks have seen tremendous success as predictive models in a variety of domains, they can be overly confident in their predictions on out-of-distribution (OOD) data. To be viable for safety-critical applications, like autonomous vehicles, neural networks must accurately estimate their epistemic or model uncertainty, achieving a level of system self-awareness. Techniques for epistemic uncertainty quantification often require OOD data during training or multiple neural network forward passes during inference. These approaches may not be suitable for real-time performance on high-dimensional inputs. Furthermore, existing methods lack interpretability of the estimated uncertainty, which limits their usefulness both to engineers for further system development and to downstream modules in the autonomy stack. We propose the use of evidential deep learning to estimate the epistemic uncertainty over a low-dimensional, interpretable latent space in a trajectory prediction setting. We introduce an interpretable paradigm for trajectory prediction that distributes the uncertainty among the semantic concepts: past agent behavior, road structure, and social context. We validate our approach on real-world autonomous driving data, demonstrating superior performance over state-of-the-art baselines. Our code is available at: https://github.com/sisl/InterpretableSelfAwarePrediction.
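Below is a minimal, hypothetical sketch of the general mechanism the abstract refers to by "evidential deep learning": a single deterministic head maps a low-dimensional latent vector to Dirichlet evidence, so epistemic uncertainty (Dirichlet vacuity) comes from one forward pass with no OOD data at training time. This is not the authors' implementation; the class name `EvidentialHead`, the layer sizes, and the choice of softplus activation are illustrative assumptions.

```python
# Illustrative evidential head (PyTorch), not the paper's actual architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F


class EvidentialHead(nn.Module):
    """Maps a low-dimensional latent vector to Dirichlet evidence."""

    def __init__(self, latent_dim: int, num_classes: int):
        super().__init__()
        self.fc = nn.Linear(latent_dim, num_classes)
        self.num_classes = num_classes

    def forward(self, z: torch.Tensor):
        # Evidence must be non-negative; softplus is a common choice.
        evidence = F.softplus(self.fc(z))
        alpha = evidence + 1.0                       # Dirichlet concentration
        strength = alpha.sum(dim=-1, keepdim=True)   # total evidence S
        probs = alpha / strength                     # expected class probabilities
        epistemic = self.num_classes / strength      # vacuity in (0, 1]
        return probs, epistemic


# Usage: z could be a latent encoding of a semantic concept such as past
# agent behavior, road structure, or social context.
head = EvidentialHead(latent_dim=16, num_classes=8)
probs, epistemic = head(torch.randn(4, 16))
```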