Multiple robots can collaboratively perceive a scene (e.g., detect objects) more accurately than an individual robot, but such collaboration is vulnerable to adversarial attacks when deep learning is used. Adversarial defenses could address this, but training them typically requires knowledge of the attack mechanism, which is often unavailable. We instead propose ROBOSAC, a novel sampling-based defense strategy that generalizes to unseen attackers. Our key idea is that collaborative perception should reach consensus with, rather than diverge from, individual perception. This leads to our hypothesize-and-verify framework: perception results with and without collaboration from a random subset of teammates are compared until a consensus is reached. In this framework, a larger sampled subset of teammates usually yields better perception performance but requires more sampling time to reject potential attackers. We therefore derive how many sampling trials are needed to guarantee an attacker-free subset of a desired size, or equivalently, the maximum subset size that can be successfully sampled within a given number of trials. We validate our method on collaborative 3D object detection in autonomous driving scenarios.
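The hypothesize-and-verify loop and the trial-count bound described above can be sketched as follows. This is an illustrative sketch rather than the paper's implementation: the function names, the RANSAC-style success probability p = C(n−k, s)/C(n, s), and the `fuse`/`consensus` callbacks are assumptions introduced here for clarity.

```python
import math
import random

def num_trials(eta: float, n: int, k: int, s: int) -> int:
    """Number of sampling trials N so that, with probability at least
    eta, at least one sampled subset of size s contains no attackers,
    given n teammates of which k are attackers.

    Assumes a RANSAC-style bound: each trial independently succeeds
    (draws an attacker-free subset) with probability
    p = C(n - k, s) / C(n, s).
    """
    p = math.comb(n - k, s) / math.comb(n, s)
    return math.ceil(math.log(1 - eta) / math.log(1 - p))

def robosac_round(ego_result, teammates, fuse, consensus, s, max_trials):
    """Hypothesize-and-verify loop (illustrative): randomly sample a
    subset of s teammates, fuse their messages with the ego result,
    and accept the fused result once it agrees with the ego-only
    perception (consensus); otherwise resample."""
    for _ in range(max_trials):
        subset = random.sample(teammates, s)
        fused = fuse(ego_result, subset)
        if consensus(ego_result, fused):
            return fused, subset
    return ego_result, []  # fall back to individual perception
```

For example, with n = 6 teammates, k = 1 attacker, and a desired attacker-free subset of size s = 2, `num_trials(0.99, 6, 1, 2)` gives 5 trials; the sketch also shows the trade-off in the abstract, since a larger s lowers p and raises the required number of trials.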