Emergency vehicles in service have right-of-way over all other vehicles, which are therefore required to take proper actions to yield to emergency vehicles with active sirens. Since this task requires human drivers to coordinate their ears and eyes, fully autonomous vehicles likewise need audio detection as a supplement to vision-based algorithms. In urban driving scenarios, we need to know both the existence of emergency vehicles and their positions relative to the ego vehicle in order to decide on proper actions. We present a novel system spanning the collection of real-world siren data to the deployment of models using only two cost-efficient microphones. We achieve promising performance on each task separately, especially within the crucial 10 m to 50 m reaction range (our ego vehicle is about 5 m long and 2 m wide): the recall rate for determining siren existence is 99.16%, the median and mean absolute angle errors are 9.64° and 19.18° respectively, and the median and mean absolute distance errors within that range are 9.30 m and 10.58 m respectively. We also benchmark various machine learning approaches that determine siren existence and perform sound source localization, covering both direction and distance simultaneously, within 50 ms of latency.
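The abstract does not specify how direction is estimated from the two-microphone array; the paper benchmarks learned models for this. As a purely illustrative baseline (not the authors' method), direction of arrival with a two-microphone setup is classically estimated from the time difference of arrival (TDOA) between the channels, e.g. via GCC-PHAT. The sketch below, with assumed parameters (16 kHz sampling, hypothetical microphone spacing), recovers a known sample delay from synthetic noise:

```python
import numpy as np

def gcc_phat(sig, ref, fs, max_tau=None):
    """Estimate the time delay of `sig` relative to `ref` via GCC-PHAT."""
    n = sig.shape[0] + ref.shape[0]
    SIG = np.fft.rfft(sig, n=n)
    REF = np.fft.rfft(ref, n=n)
    R = SIG * np.conj(REF)
    R /= np.abs(R) + 1e-12          # PHAT weighting: keep phase, drop magnitude
    cc = np.fft.irfft(R, n=n)
    max_shift = n // 2
    if max_tau is not None:
        max_shift = min(int(fs * max_tau), max_shift)
    # Rearrange so index 0 corresponds to lag -max_shift
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    shift = np.argmax(np.abs(cc)) - max_shift
    return shift / fs

def doa_from_tdoa(tau, mic_distance, c=343.0):
    """Convert a time delay (s) to an angle of arrival in degrees (broadside = 0)."""
    s = np.clip(tau * c / mic_distance, -1.0, 1.0)
    return np.degrees(np.arcsin(s))

# Synthetic check: the left channel lags the right by 10 samples.
fs = 16000
rng = np.random.default_rng(0)
src = rng.standard_normal(4096)
delay = 10
left = np.concatenate((np.zeros(delay), src))
right = np.concatenate((src, np.zeros(delay)))
tau = gcc_phat(left, right, fs)
print(round(tau * fs))  # recovered delay in samples
```

With only two microphones, this classical approach resolves angle up to a front/back ambiguity and gives no range estimate, which motivates the learned models the paper benchmarks for joint direction-and-distance localization.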