Discrete-time discrete-state finite Markov chains are versatile mathematical models for a wide range of real-life stochastic processes. One of the most common tasks in studies of Markov chains is computation of the stationary distribution. Without loss of generality, and drawing our motivation from applications to large networks, we interpret this problem as one of computing the stationary distribution of a random walk on a graph. We propose a new controlled, easily distributed algorithm for this task, briefly summarized as follows: at the beginning, each node receives a fixed amount of cash (positive or negative), and at each iteration, some nodes receive a `green light' to distribute their wealth or debt proportionally to the transition probabilities of the Markov chain; the stationary probability of a node is computed as the ratio of the cash distributed by this node to the total cash distributed by all nodes together. Our method includes as special cases a wide range of known, very different, and previously disconnected methods, including power iterations, Gauss-Southwell, and online distributed algorithms. We prove exponential convergence of our method, demonstrate its high efficiency, and derive scheduling strategies for the green light that achieve a convergence rate faster than state-of-the-art algorithms.
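The cash-distribution idea described above can be sketched in a few lines. The following is a minimal illustrative implementation, not the authors' exact algorithm: the transition matrix, the round-robin green-light schedule, and all variable names are assumptions chosen for the example.

```python
import numpy as np

# Illustrative transition matrix of a small irreducible Markov chain
# (assumption: any irreducible finite chain would do).
P = np.array([
    [0.0, 0.5, 0.5],
    [0.3, 0.0, 0.7],
    [0.6, 0.4, 0.0],
])
n = P.shape[0]

cash = np.ones(n)       # each node starts with a fixed amount of cash
history = np.zeros(n)   # total cash each node has distributed so far

for t in range(30000):
    i = t % n           # "green light" schedule: simple round-robin here
    c = cash[i]
    history[i] += c     # record the cash node i is about to distribute
    cash[i] = 0.0
    cash += c * P[i]    # spread it proportionally to transition probabilities

# Stationary probability of a node = its distributed cash / total distributed cash
pi_hat = history / history.sum()

# Reference check: solve pi P = pi, sum(pi) = 1 directly
A = np.vstack([P.T - np.eye(n), np.ones(n)])
pi_true = np.linalg.lstsq(A, np.append(np.zeros(n), 1.0), rcond=None)[0]
print(pi_hat, pi_true)
```

Note that the total cash in the system is conserved by each distribution step, while the cumulative distributed cash `history` grows without bound; the normalized ratio is what converges to the stationary distribution.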