In this paper, we introduce the notion of Cooperative Perception Error Models (coPEMs) towards achieving an effective and efficient integration of V2X solutions within a virtual test environment. We focus our analysis on the occlusion problem in the (onboard) perception of Autonomous Vehicles (AVs), which can manifest as misdetection errors for occluded objects. Cooperative perception (CP) solutions based on Vehicle-to-Everything (V2X) communications aim to avoid such issues by cooperatively leveraging additional points of view on the world around the AV. This approach usually requires many sensors, mainly cameras and LiDARs, to be deployed simultaneously in the environment, either as part of the road infrastructure or on other traffic vehicles. However, implementing a large number of sensor models in a virtual simulation pipeline is often prohibitively computationally expensive. Therefore, in this paper, we rely on extending Perception Error Models (PEMs) to efficiently implement such cooperative perception solutions along with the errors and uncertainties associated with them. We demonstrate the approach by comparing the safety achievable by an AV challenged with a traffic scenario in which occlusion is the primary cause of a potential collision.