This paper introduces a decentralized, state-dependent Markov chain synthesis method for probabilistic guidance of a large number of autonomous agents toward a desired steady-state distribution. The probabilistic swarm guidance approach uses a Markov chain that specifies the probabilities with which agents transition from one state to another, while satisfying prescribed transition constraints and converging to the desired steady-state distribution. Our main contribution is a decentralized approach to Markov chain synthesis that updates the underlying column-stochastic Markov matrix as a function of the state, i.e., the current swarm probability distribution. A decentralized synthesis method eliminates the need for a complex communication architecture. Furthermore, the proposed method minimizes the number of state transitions, and hence resource usage, while guaranteeing convergence to the desired distribution. We also show that the resulting convergence rate is faster than that of previously proposed methodologies.
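To make the underlying mechanism concrete, the following is a minimal sketch, not the paper's synthesis method: a column-stochastic matrix M with a prescribed stationary distribution is built here via a standard Metropolis-Hastings construction (an assumption for illustration; the paper's decentralized, state-dependent synthesis differs), and the swarm density x evolves as x[k+1] = M x[k] until it converges to the desired distribution.

```python
import numpy as np

def metropolis_hastings_chain(v, adjacency):
    """Build a column-stochastic matrix whose stationary distribution is v,
    restricted to the allowed transitions in `adjacency`.
    Hypothetical helper for illustration only."""
    n = len(v)
    M = np.zeros((n, n))
    for j in range(n):
        neighbors = [i for i in range(n) if adjacency[i, j] and i != j]
        d_j = len(neighbors)
        for i in neighbors:
            d_i = np.count_nonzero(adjacency[:, i]) - 1
            # propose uniformly over neighbors, accept with
            # min(1, v_i * d_j / (v_j * d_i)) -> detailed balance w.r.t. v
            M[i, j] = (1.0 / d_j) * min(1.0, (v[i] * d_j) / (v[j] * d_i))
        # remaining probability mass stays in state j
        M[j, j] = 1.0 - M[:, j].sum()
    return M

n = 4
A = np.ones((n, n), dtype=bool)        # all transitions allowed (toy constraint set)
v = np.array([0.4, 0.3, 0.2, 0.1])     # desired steady-state swarm distribution
M = metropolis_hastings_chain(v, A)

x = np.full(n, 1.0 / n)                # initial uniform swarm distribution
for _ in range(200):
    x = M @ x                          # propagate the swarm density

print(np.round(x, 3))                  # converges toward v
```

Each agent can run this locally: it samples its next state from column j of M, where j is its current state, so no central coordinator is needed once M is known; the paper's contribution is additionally making M a function of the current swarm distribution.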