Coordinating a large number of networked robots is a challenging task, especially when the robots are constantly moving around the environment and the network contains malicious attackers. Various approaches exist in the literature for detecting malicious robots, such as message sampling or suspicious behavior analysis. However, these approaches require every robot to sample every other robot in the network, leading to a slow detection process that degrades team performance. This paper introduces a method that significantly decreases the time required for legitimate robots to identify malicious robots in a scenario where the legitimate robots move randomly around the environment. Our method leverages the concept of ``Dynamic Crowd Vetting'' by combining observations from random encounters with the opinions of trusted neighboring robots to quickly improve the accuracy of malicious robot detection. The key intuition is that as long as each legitimate robot accurately estimates the legitimacy of at least some fixed subset of the team, the second-hand information it receives from trusted neighbors is enough to correct any misclassifications and provide accurate trust estimates for the rest of the team. We show that the size of this fixed subset can be characterized as a function of fundamental graph and random walk properties. Furthermore, we formally show that as the number of robots in the team increases, the detection time remains constant. We derive a closed-form expression for the critical number of time steps required for our algorithm to identify the true legitimacy of each robot to within a specified failure probability. Our theoretical results are validated through simulations demonstrating significant reductions in detection time compared to previous works that do not leverage trusted neighbor information.
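To make the crowd-vetting idea concrete, the following is a minimal, self-contained sketch; it is not the algorithm analyzed in this paper, and every name and parameter in it (N, P_CORRECT, SUBSET_SIZE, SAMPLES, and the helper functions) is an illustrative assumption. Each legitimate robot forms first-hand trust estimates for a small subset of the team from repeated noisy observations, then classifies the remaining robots by majority vote over opinions relayed by trusted neighbors.

\begin{verbatim}
# Illustrative sketch only (not the paper's algorithm); all parameters are assumed.
import random

N = 50                       # team size (assumed for illustration)
P_CORRECT = 0.7              # accuracy of a single direct observation (assumed)
SUBSET_SIZE = 8              # robots each robot vets first-hand (assumed)
SAMPLES = 15                 # direct observations per first-hand target (assumed)
MALICIOUS = set(random.sample(range(N), 10))   # ground-truth malicious robots

def observe(target):
    """One noisy first-hand observation: 1 = looks legitimate, 0 = malicious."""
    truth = 0 if target in MALICIOUS else 1
    return truth if random.random() < P_CORRECT else 1 - truth

def first_hand(robot):
    """Trust estimates for a small random subset, from repeated observations."""
    subset = random.sample([r for r in range(N) if r != robot], SUBSET_SIZE)
    return {t: int(sum(observe(t) for _ in range(SAMPLES)) > SAMPLES / 2)
            for t in subset}

def crowd_vet(robot, neighbor_opinions):
    """Classify the rest of the team by majority vote over trusted opinions."""
    estimate = first_hand(robot)
    for target in range(N):
        if target == robot or target in estimate:
            continue
        votes = [op[target] for op in neighbor_opinions if target in op]
        if votes:
            estimate[target] = int(sum(votes) > len(votes) / 2)
    return estimate

# Robot 0 vets the team using opinions relayed by ten trusted legitimate neighbors.
neighbors = [r for r in range(1, N) if r not in MALICIOUS][:10]
opinions = [first_hand(n) for n in neighbors]
labels = crowd_vet(0, opinions)
errors = sum(lab != (0 if t in MALICIOUS else 1) for t, lab in labels.items())
print(f"classified {len(labels)} robots, {errors} misclassified")
\end{verbatim}

In the setting considered in the paper, the first-hand subset arises from random encounters as the robots move, and the analysis characterizes how large that subset must be and how many time steps suffice for a given failure probability; the sketch above only illustrates the fusion of first-hand observations with second-hand trusted-neighbor opinions.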