This paper studies the problem of distributed classification with a network of heterogeneous agents. The agents seek to jointly identify the underlying target class that best describes a sequence of observations. The problem is first abstracted to a hypothesis-testing framework, in which the agents seek to agree on the hypothesis (target class) that best matches the distribution of observations. Non-Bayesian social learning theory solves this problem efficiently by allowing the agents to sequentially communicate and update their beliefs about each hypothesis over the network. Most existing approaches assume that agents have access to exact statistical models for each hypothesis. In many practical applications, however, agents learn the likelihood models from limited data, which induces uncertainty in the likelihood-function parameters. In this work, we build upon the concept of uncertain models to incorporate the agents' uncertainty into the likelihoods by identifying a broad set of parametric distributions that allows the agents' beliefs to converge to the same result as a centralized approach. Furthermore, we empirically explore extensions to non-parametric models to provide a generalized framework of uncertain models in non-Bayesian social learning.
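The sequential communicate-and-update scheme described above can be sketched with the standard log-linear social learning rule, in which each agent geometrically averages its neighbors' beliefs and then performs a local Bayesian update. The sketch below is illustrative only and assumes exact (certain) likelihood models; the mixing matrix, Gaussian hypotheses, and all parameter values are hypothetical choices, not taken from the paper.

```python
import numpy as np

# Illustrative sketch of non-Bayesian social learning (log-linear rule).
# Assumptions (not from the paper): 3 agents, 2 Gaussian hypotheses with
# unit variance, a doubly stochastic mixing matrix A, and exact likelihoods.
rng = np.random.default_rng(0)

n_agents, n_hyp = 3, 2
means = np.array([0.0, 1.0])        # hypothesis-conditional means
A = np.array([[0.5, 0.5, 0.0],      # doubly stochastic network weights
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.5]])
beliefs = np.full((n_agents, n_hyp), 1.0 / n_hyp)  # uniform prior beliefs

def loglik(x, theta):
    """Gaussian log-likelihood of observation x under hypothesis theta."""
    return -0.5 * (x - means[theta]) ** 2

for t in range(200):
    x = rng.normal(means[0], 1.0, size=n_agents)  # true hypothesis is 0
    # Geometric average of neighbors' beliefs (consensus step) ...
    log_b = A @ np.log(beliefs)
    # ... followed by a local Bayesian update with the new observation.
    log_b += np.array([[loglik(x[i], th) for th in range(n_hyp)]
                       for i in range(n_agents)])
    log_b -= log_b.max(axis=1, keepdims=True)     # numerical stabilization
    beliefs = np.exp(log_b)
    beliefs /= beliefs.sum(axis=1, keepdims=True)

print(beliefs[:, 0])  # each agent's belief in the true hypothesis
```

Under this rule, every agent's belief concentrates on the hypothesis closest (in KL divergence) to the true observation distribution; the uncertain-models setting studied in the paper replaces the exact `loglik` above with likelihoods estimated from limited data.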