Federated learning systems rely on a centralized server to aggregate model updates. This server is a bandwidth- and resource-heavy bottleneck and exposes the system to privacy concerns. We instead implement a peer-to-peer learning system in which nodes train on their own data and periodically compute a weighted average of their parameters with those of their peers, with weights given by a learned trust matrix. So far, we have built a model-client framework and have used it to run experiments on the proposed system with multiple virtual nodes that in reality reside on the same machine. As stated in Iteration 1 of our proposal, we used this strategy to prove the concept of peer-to-peer learning with shared parameters. We now aim to run further experiments and build a more deployable, real-world version of the system.
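The core aggregation step described above can be sketched as follows. This is a minimal illustration, not the actual implementation: it assumes each node's parameters are flattened into a vector, that the trust matrix is non-negative, and that each row is normalized so a node's weights over its peers sum to one. The function and variable names are hypothetical.

```python
import numpy as np

def trust_weighted_average(params, trust):
    """One peer-to-peer averaging round.

    params: (n_nodes, n_params) array; row i holds node i's flattened parameters.
    trust:  (n_nodes, n_nodes) non-negative matrix; entry (i, j) is how much
            node i trusts node j. Rows are normalized here so weights sum to 1.
    Returns the new (n_nodes, n_params) parameter array.
    """
    trust = np.asarray(trust, dtype=float)
    weights = trust / trust.sum(axis=1, keepdims=True)  # row-normalize
    return weights @ params  # each row is a trust-weighted average of peers

# Toy example: node 0 trusts itself and node 1 equally and ignores node 2.
params = np.array([[0.0, 0.0],
                   [2.0, 2.0],
                   [4.0, 4.0]])
trust = np.array([[1.0, 1.0, 0.0],
                  [0.0, 1.0, 1.0],
                  [1.0, 0.0, 1.0]])
new_params = trust_weighted_average(params, trust)
# Node 0's new parameters are the mean of rows 0 and 1: [1.0, 1.0].
```

In a real deployment this multiplication would be realized by message passing, with each node pulling only the parameters of peers it assigns non-zero trust.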