In this paper, we present a reinforcement learning approach to designing a control policy for a "leader" agent that herds a swarm of "follower" agents, via repulsive interactions, as quickly as possible to a target probability distribution over a strongly connected graph. The leader control policy is a function of the swarm distribution, which evolves over time according to a mean-field model in the form of an ordinary difference equation. The dependence of the policy on agent populations at each graph vertex, rather than on individual agent activity, simplifies the observations required by the leader and enables the control strategy to scale with the number of agents. Two temporal-difference learning algorithms, SARSA and Q-Learning, are used to generate the leader control policy based on the follower agent distribution and the leader's location on the graph. A simulation environment corresponding to a grid graph with 4 vertices was used to train and validate the control policies for follower agent populations ranging from 10 to 100. Finally, the control policies trained on 100 simulated agents were used to successfully redistribute a physical swarm of 10 small robots to a target distribution among 4 spatial regions.
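To make the tabular setup concrete, the sketch below shows one way such a leader policy could be learned with Q-Learning on a 4-vertex grid graph: the state is the leader's vertex together with the follower counts at each vertex, and actions are leader moves to neighboring vertices. The follower repulsion rule, reward, target distribution, and hyperparameters here are illustrative assumptions only, not the mean-field model or parameters used in the paper.

```python
# Minimal illustrative sketch (not the authors' implementation): tabular
# Q-Learning for a leader herding followers on a 2x2 grid graph.
# Follower dynamics, reward, and hyperparameters are assumed for illustration.
import random
from collections import defaultdict

EDGES = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}  # 2x2 grid graph
TARGET = (3, 3, 2, 2)       # assumed target distribution of 10 followers
N_FOLLOWERS = 10
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1

def step_followers(counts, leader):
    """Assumed repulsive interaction: each follower at the leader's vertex
    hops to a uniformly random neighboring vertex with probability 0.5."""
    counts = list(counts)
    movers = sum(random.random() < 0.5 for _ in range(counts[leader]))
    counts[leader] -= movers
    for _ in range(movers):
        counts[random.choice(EDGES[leader])] += 1
    return tuple(counts)

def reward(counts):
    # Assumed reward: negative L1 distance to the target distribution.
    return -sum(abs(c - t) for c, t in zip(counts, TARGET))

Q = defaultdict(float)      # Q[(state, action)], state = (leader, counts)

def choose_action(state):
    # Epsilon-greedy choice among the leader's neighboring vertices.
    leader = state[0]
    if random.random() < EPS:
        return random.choice(EDGES[leader])
    return max(EDGES[leader], key=lambda a: Q[(state, a)])

for episode in range(5000):
    counts = (N_FOLLOWERS, 0, 0, 0)   # all followers start at vertex 0
    state = (0, counts)               # leader also starts at vertex 0
    for t in range(50):
        action = choose_action(state)           # leader moves to a neighbor
        counts = step_followers(counts, action) # followers are repelled
        next_state = (action, counts)
        r = reward(counts)
        # Q-Learning update (off-policy TD target uses the greedy action).
        best_next = max(Q[(next_state, a)] for a in EDGES[action])
        Q[(state, action)] += ALPHA * (r + GAMMA * best_next - Q[(state, action)])
        state = next_state
```

A SARSA variant would differ only in the TD target, using the Q-value of the action actually selected at the next state rather than the greedy maximum.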