We consider a distributed learning setting in which each agent (learner) holds its own parametric model and data source. The goal is to integrate information across a set of learners to enhance the prediction accuracy of a given learner. A natural way to integrate information is to build a joint model over a group of learners that share common parameters of interest. However, the underlying parameter sharing pattern across learners may not be known a priori. Misspecifying the parameter sharing pattern, or the parametric model of any learner, often yields biased estimates and degrades prediction accuracy. We propose a general method for integrating information across a set of learners that is robust to misspecification of both the models and the parameter sharing patterns. The key idea is to sequentially incorporate additional learners that improve the prediction accuracy of an existing joint model built under user-specified parameter sharing patterns. Theoretically, we show that the proposed method data-adaptively selects the most suitable form of parameter sharing and thereby enhances the predictive performance of any particular learner of interest. Extensive numerical studies demonstrate the promising performance of the proposed method.
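To make the sequential integration idea concrete, below is a minimal, hypothetical sketch of the greedy procedure the abstract describes: starting from the target learner's own model, it adds one learner (together with a candidate parameter-sharing pattern) at a time, and keeps an addition only if it reduces the target's held-out prediction error. The toy linear models, the two-stage pooled fit, and all names (`fit_shared`, `sequential_integration`, the candidate `patterns`) are illustrative assumptions, not the paper's actual estimator.

```python
# Hypothetical sketch of sequential information integration across learners.
# Toy setting: each learner has a linear model; a "sharing pattern" is a set
# of coefficient indices assumed common across the active learners.
import numpy as np

rng = np.random.default_rng(0)

def fit_shared(datasets, shared_idx, target):
    """Toy joint fit: estimate shared coefficients by pooled least squares,
    then estimate the target's remaining coefficients on its own data."""
    p = datasets[target][0].shape[1]
    shared = np.array(sorted(shared_idx), dtype=int)
    own = np.array([j for j in range(p) if j not in shared_idx], dtype=int)
    beta = np.zeros(p)
    if shared.size:
        Xs = np.vstack([X[:, shared] for X, _ in datasets.values()])
        ys = np.concatenate([y for _, y in datasets.values()])
        beta[shared] = np.linalg.lstsq(Xs, ys, rcond=None)[0]
    Xt, yt = datasets[target]
    if own.size:
        resid = yt - Xt[:, shared] @ beta[shared] if shared.size else yt
        beta[own] = np.linalg.lstsq(Xt[:, own], resid, rcond=None)[0]
    return beta

def holdout_mse(beta, holdout):
    X, y = holdout
    return float(np.mean((y - X @ beta) ** 2))

def sequential_integration(target, learners, patterns, holdout):
    """Greedy loop: keep the single (learner, pattern) addition that most
    improves the target's held-out accuracy; stop when nothing helps."""
    active = {target: learners[target]}
    best_pattern = set()
    best_beta = fit_shared(active, best_pattern, target)
    best_err = holdout_mse(best_beta, holdout)
    remaining = {k: v for k, v in learners.items() if k != target}
    while remaining:
        trial = None
        for k, data in remaining.items():
            for pat in patterns:  # user-specified sharing patterns
                cand = dict(active)
                cand[k] = data
                beta = fit_shared(cand, pat, target)
                err = holdout_mse(beta, holdout)
                if err < best_err:
                    best_err, trial = err, (k, pat, beta)
        if trial is None:
            break  # no remaining learner improves the target learner
        k, best_pattern, best_beta = trial
        active[k] = remaining.pop(k)
    return best_beta, best_pattern, best_err

# Toy data: learners "A" and "B" share the first two coefficients, while
# learner "C" deviates in the second, so pooling with "C" under the wrong
# pattern would bias the target learner "A".
p = 4
beta_shared = np.array([2.0, -1.0, 0.0, 0.0])
def make(n, extra):
    X = rng.normal(size=(n, p))
    y = X @ (beta_shared + extra) + 0.1 * rng.normal(size=n)
    return X, y
learners = {"A": make(40, np.zeros(p)),
            "B": make(200, np.zeros(p)),
            "C": make(200, np.array([0.0, 3.0, 0.0, 0.0]))}
holdout = make(500, np.zeros(p))  # drawn from the target's distribution
patterns = [{0, 1}, {0}, {1}]     # candidate sharing patterns
beta, pattern, err = sequential_integration("A", learners, patterns, holdout)
print("selected sharing pattern:", pattern, "holdout MSE: %.4f" % err)
```

Because an addition that worsens the target's held-out error is never accepted, a learner whose parameters deviate from the assumed sharing pattern (learner "C" in the toy data) is either left out or pooled only through a compatible pattern such as {0}, which illustrates the robustness and data-adaptive pattern selection the abstract refers to.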