We study the problem of estimating an unknown parameter in a distributed and online manner. Existing work on distributed online learning typically either focuses on asymptotic analysis or provides bounds on regret. However, such results may not directly translate into bounds on the error of the learned model after a finite number of time steps. In this paper, we propose a distributed online estimation algorithm that enables each agent in a network to improve its estimation accuracy by communicating with its neighbors. We provide non-asymptotic bounds on the estimation error, leveraging the statistical properties of the underlying model. Our analysis demonstrates a trade-off between estimation error and communication costs. Furthermore, it allows us to determine a time at which communication can be stopped (to limit communication costs) while still meeting a desired estimation accuracy. We also provide a numerical example to validate our results.