Deep learning has the potential to dramatically impact navigation and tracking state estimation problems critical to autonomous vehicles and robotics. Measurement uncertainty in state estimation systems based on Kalman and other Bayes filters is typically modeled as a fixed covariance matrix. This assumption is risky, particularly for "black box" deep learning models, in which uncertainty can vary dramatically and unexpectedly. Accurate quantification of multivariate uncertainty will allow the full potential of deep learning to be used more safely and reliably in these applications. We show how to model multivariate uncertainty for regression problems with neural networks, incorporating both aleatoric and epistemic sources of heteroscedastic uncertainty. We train a deep uncertainty covariance matrix model in two ways: directly, using a multivariate Gaussian density loss function, and indirectly, using end-to-end training through a Kalman filter. We experimentally show, in a visual tracking problem, the large impact that accurate multivariate uncertainty quantification can have on Kalman filter performance for both in-domain and out-of-domain evaluation data. We additionally show, in a challenging visual odometry problem, how end-to-end filter training can allow uncertainty predictions to compensate for filter weaknesses.
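As a concrete illustration of the first (direct) training route, a network output head can predict both a mean and the Cholesky factor of a full measurement covariance, and be trained with the multivariate Gaussian negative log-likelihood. The sketch below is a minimal PyTorch-style example under assumed dimensions and names (GaussianCovarianceHead and gaussian_nll are hypothetical, not the paper's implementation).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GaussianCovarianceHead(nn.Module):
    """Predicts a mean vector and the lower-triangular Cholesky factor L of a
    full covariance (Sigma = L @ L.T), giving heteroscedastic multivariate
    uncertainty. Layer sizes and names are illustrative only."""

    def __init__(self, in_features: int, out_dim: int):
        super().__init__()
        self.out_dim = out_dim
        self.mean = nn.Linear(in_features, out_dim)
        # One raw value per entry of the lower triangle of L.
        self.tril = nn.Linear(in_features, out_dim * (out_dim + 1) // 2)
        self.register_buffer("tril_idx", torch.tril_indices(out_dim, out_dim))

    def forward(self, features: torch.Tensor):
        mu = self.mean(features)
        raw = self.tril(features)
        # Scatter raw values into a lower-triangular matrix per sample.
        L = features.new_zeros(features.shape[0], self.out_dim, self.out_dim)
        L[:, self.tril_idx[0], self.tril_idx[1]] = raw
        # Re-parameterize the diagonal to be strictly positive so that
        # Sigma = L @ L.T is positive definite.
        diag = F.softplus(L.diagonal(dim1=-2, dim2=-1)) + 1e-6
        L = L.tril(-1) + torch.diag_embed(diag)
        return mu, L


def gaussian_nll(mu, L, target):
    """Negative log-likelihood of targets under N(mu, L @ L.T): the
    multivariate Gaussian density loss used for direct training."""
    dist = torch.distributions.MultivariateNormal(loc=mu, scale_tril=L)
    return -dist.log_prob(target).mean()


# Toy usage: 2-D measurements (e.g. pixel coordinates of a tracked target)
# predicted from 64-D backbone features; both numbers are placeholders.
head = GaussianCovarianceHead(in_features=64, out_dim=2)
features = torch.randn(8, 64)   # stand-in for backbone features
targets = torch.randn(8, 2)     # ground-truth measurements
mu, L = head(features)
loss = gaussian_nll(mu, L, targets)
loss.backward()
```

In the second (indirect) route described above, the predicted mean and covariance would instead feed the measurement update of a differentiable Kalman filter, with the training loss applied to the filtered state estimates rather than to the raw measurements.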