In driving scenarios with poor visibility or occlusions, it is important that the autonomous vehicle take all uncertainties into account when making driving decisions, including the choice of a safe speed. Grid-based perception outputs, such as occupancy grids, and object-based outputs, such as lists of detected objects, must then be accompanied by well-calibrated uncertainty estimates. We highlight limitations of the state of the art and propose a more complete set of uncertainties to be reported, in particular undetected-object-ahead probabilities. We suggest a novel way to obtain these probabilistic outputs from bird's-eye-view probabilistic semantic segmentation, using the FIERY model as an example. We demonstrate that the obtained probabilities are not calibrated out of the box and propose methods to achieve well-calibrated uncertainties.
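To illustrate the kind of quantity we mean by an undetected-object-ahead probability, the following is a minimal sketch, not the paper's method: given per-cell occupancy probabilities from a bird's-eye-view probabilistic semantic segmentation head (such as FIERY's output), it aggregates the cells in a region ahead of the ego vehicle under a simplifying independence assumption. The function name `prob_object_ahead`, the grid shape, and the region mask are all illustrative choices.

```python
import numpy as np

def prob_object_ahead(occupancy_probs: np.ndarray, region_mask: np.ndarray) -> float:
    """Probability that at least one cell in the region ahead is occupied.

    occupancy_probs: per-cell occupancy probabilities in [0, 1], e.g. from a
        BEV probabilistic semantic segmentation head.
    region_mask: boolean mask selecting the cells ahead of the ego vehicle.

    Assumes cell occupancies are independent, which is a simplification;
    the well-calibrated probabilities discussed in the text would be
    obtained only after an additional calibration step.
    """
    p = occupancy_probs[region_mask]
    # P(at least one occupied cell) = 1 - P(all cells free)
    return 1.0 - float(np.prod(1.0 - p))

# Toy example: a 4x4 BEV grid where the two front rows form the region ahead.
probs = np.full((4, 4), 0.05)
probs[0, 2] = 0.6           # one cell with elevated occupancy probability
mask = np.zeros((4, 4), dtype=bool)
mask[:2, :] = True
print(prob_object_ahead(probs, mask))  # ~0.72
```

Such an aggregate is only as trustworthy as the per-cell probabilities it is built from, which is why the calibration analysis in the paper matters: miscalibrated segmentation scores propagate directly into the reported undetected-object-ahead probability.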