Social learning strategies enable agents to infer the underlying true state of nature in a distributed manner by receiving private environmental signals and exchanging beliefs with their neighbors. Previous studies have focused extensively on static environments, where the underlying true state remains unchanged over time. In this paper, we consider a dynamic setting where the true state evolves according to a Markov chain with equal exit probabilities. Based on this assumption, we present a social learning strategy for dynamic environments, termed Diffusion $\alpha$-HMM. By leveraging a simplified parameterization, we derive a nonlinear dynamical system that governs the evolution of the log-belief ratio over time. This formulation also reveals the relationship between the linearized form of Diffusion $\alpha$-HMM and Adaptive Social Learning, a well-established social learning strategy for dynamic environments. Furthermore, we analyze the convergence and fixed-point properties of a reference system, providing theoretical guarantees on the learning performance of the proposed algorithm in dynamic settings. Numerical experiments compare various distributed social learning strategies across a range of dynamic environments, demonstrating the impact of nonlinearity and parameterization on learning performance.
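The abstract describes the overall pipeline: a hidden state evolving as a Markov chain with equal exit probabilities, local Bayesian updates from private signals, and a diffusion (combination) step among neighbors. The sketch below is a minimal, hedged illustration of that generic structure, not the paper's exact Diffusion $\alpha$-HMM recursion: the combination matrix, likelihood models, and the way $\alpha$ enters the prediction step are illustrative assumptions. It does use the fact that, for an equal-exit-probability chain, the HMM prediction step reduces to a convex mixture of the belief with the uniform distribution.

```python
# Illustrative sketch of diffusion-based social learning over a dynamic
# (Markovian) state. NOT the paper's exact Diffusion alpha-HMM algorithm:
# the graph, likelihoods, and the role of alpha are assumptions for the demo.
import numpy as np

rng = np.random.default_rng(0)

H = 3          # number of hypotheses (possible states of nature)
N = 5          # number of agents
eps = 0.05     # exit probability of the true state per time step

# Markov transition matrix with equal exit probabilities: stay with
# probability 1 - eps, otherwise jump uniformly to one of the other states.
T = (1 - eps) * np.eye(H) + (eps / (H - 1)) * (np.ones((H, H)) - np.eye(H))

# For this T, the prediction step mu @ T equals (1 - alpha) * mu + alpha / H
# with alpha = eps * H / (H - 1); this is the simplification exploited below.
alpha = eps * H / (H - 1)

# Doubly stochastic combination matrix over a ring graph (illustrative choice).
A = np.zeros((N, N))
for k in range(N):
    A[k, k] = 0.5
    A[k, (k - 1) % N] = 0.25
    A[k, (k + 1) % N] = 0.25

# Each agent's likelihood of its discrete private signal (4 values) under
# each hypothesis; random models purely for illustration.
L = rng.dirichlet(np.ones(4), size=(N, H))   # shape (N, H, 4)

mu = np.full((N, H), 1.0 / H)                # beliefs, one row per agent
state = 0                                    # true state of nature

for t in range(200):
    # True state evolves according to the Markov chain.
    state = rng.choice(H, p=T[state])

    # 1) Prediction step: mix each belief with the uniform distribution,
    #    equivalent to mu @ T for the equal-exit-probability chain.
    pred = (1 - alpha) * mu + alpha / H

    # 2) Local Bayesian update with each agent's private signal.
    signals = np.array([rng.choice(4, p=L[k, state]) for k in range(N)])
    local = pred * L[np.arange(N), :, signals]
    local /= local.sum(axis=1, keepdims=True)

    # 3) Diffusion (combination) step: geometric averaging of neighbors'
    #    intermediate beliefs, i.e., arithmetic averaging of log-beliefs.
    log_mu = A @ np.log(local)
    mu = np.exp(log_mu - log_mu.max(axis=1, keepdims=True))
    mu /= mu.sum(axis=1, keepdims=True)

print("final beliefs (rows = agents):")
print(np.round(mu, 3))
```

Tracking how the rows of `mu` concentrate on the current true state, and how quickly they re-concentrate after the state jumps, gives an intuitive view of the adaptation-versus-accuracy trade-off that the paper analyzes through the log-belief-ratio dynamics.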