In this paper, we show how uncertainty estimation can be leveraged to enable safety-critical image segmentation in autonomous driving, by triggering a fallback behavior when a target accuracy cannot be guaranteed. We introduce a new uncertainty measure based on disagreeing predictions, as quantified by a dissimilarity function. We propose to estimate this dissimilarity by training a deep neural architecture in parallel to the task-specific network. This dedicates the observer to uncertainty estimation while leaving the task-specific network free to make predictions. We propose to train the observer with self-supervision, which means that our method does not require additional training data. We show experimentally that our approach is far less computationally intensive at inference time than competing methods (e.g., MCDropout), while delivering better results on safety-oriented evaluation metrics on the CamVid dataset, especially in the presence of glare artifacts.
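To make the observer idea concrete, below is a minimal PyTorch-style sketch of one self-supervised training step, written under assumptions not spelled out in the abstract: a frozen task network `task_net` producing per-pixel class logits, an observer `obs_net` producing a single per-pixel logit, and a target error map obtained by comparing the task network's predictions to the existing ground-truth labels (so no additional data is needed). The names, shapes, and loss are illustrative; the paper's actual dissimilarity function and observer architecture may differ.

```python
import torch
import torch.nn.functional as F


def train_observer_step(task_net, obs_net, images, labels, optimizer):
    """One self-supervised training step for the observer network.

    The task network is frozen; the observer is trained to predict, per pixel,
    whether the task network's prediction disagrees with the ground truth.
    """
    task_net.eval()
    with torch.no_grad():
        logits = task_net(images)            # (B, C, H, W) segmentation logits
        preds = logits.argmax(dim=1)         # (B, H, W) predicted classes

    # Self-supervised target: 1 where the task network is wrong, 0 where it is right.
    error_map = (preds != labels).float()    # (B, H, W)

    # Observer predicts a per-pixel error probability from the same input image.
    obs_logits = obs_net(images).squeeze(1)  # (B, H, W)
    loss = F.binary_cross_entropy_with_logits(obs_logits, error_map)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

At inference time, one would run only `task_net` and `obs_net` in a single forward pass each (hence the low overhead compared to sampling-based methods such as MCDropout), and trigger the fallback behavior wherever the observer's predicted error probability exceeds a chosen threshold.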