This paper studies probabilistic rates of convergence for consensus+innovations-type algorithms in random, generic networks. For each node, we find a lower bound and a family of upper bounds on the large deviations rate function, thus enabling the computation of exponential convergence rates for events of interest on the iterates. Relevant applications include error exponents in distributed hypothesis testing, rates of convergence of beliefs in social learning, and inaccuracy rates in distributed estimation. The bounds on the rate function have a very particular form at each node: they are constructed as the convex envelope of the rate function of a hypothetical fusion center and the rate function corresponding to a certain topological mode of the node's presence. We further show tightness of the discovered bounds in several cases, such as pendant nodes and regular networks, thus establishing the first proof of the large deviations principle for consensus+innovations and social learning in random networks.
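To fix ideas, the following is a hedged sketch of the objects named above; the symbols ($x_{i,k}$, $W_{ij,k}$, $\alpha_k$, $z_{i,k}$, $I$, $N$, $I_{\mathrm{cen}}$, $I_{i,\mathrm{top}}$) are our own illustrative naming, not necessarily the paper's exact notation or statement. One common form of a consensus+innovations iterate at node $i$ mixes neighbors' states with a fresh local observation,
\[
  x_{i,k} \;=\; \sum_{j} W_{ij,k}\, x_{j,k-1} \;+\; \alpha_k \big( z_{i,k} - x_{i,k-1} \big),
\]
with random mixing weights $W_{ij,k}$ and step sizes $\alpha_k$. The abstract's claim is that the large deviations rate function $I_i$ of this iterate, defined informally through
\[
  \mathbb{P}\big( x_{i,k} \in E \big) \;\approx\; e^{-k \,\inf_{x \in E} I_i(x)},
\]
is sandwiched by functions of the form
\[
  \mathrm{co}\big( I_{\mathrm{cen}},\; I_{i,\mathrm{top}} \big)(x),
\]
where $\mathrm{co}(f,g)$ denotes the convex envelope (largest convex minorant) of $\min(f,g)$, $I_{\mathrm{cen}}$ is the rate function of a hypothetical fusion center that pools all nodes' observations at every step (for $N$ i.i.d. per-node observations with Cramér rate $I$, one would expect $I_{\mathrm{cen}} = N I$), and $I_{i,\mathrm{top}}$ is the rate attached to a topological mode of node $i$'s presence in the random network, e.g., pricing in the exponential cost of the network events that keep node $i$ poorly connected, as for a pendant node whose single link repeatedly fails. Tightness of the lower and upper bounds in the pendant-node and regular-network cases is what yields the full large deviations principle there.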