Decentralized learning is an efficient emerging paradigm for boosting the computing capability of multiple resource-bounded computing agents. In the big data era, when inference is performed within distributed and federated learning (DL and FL) frameworks, the central server must process large amounts of data while relying on various agents to perform distributed training tasks. Given the decentralized computing topology, privacy has become a first-class concern. Moreover, the limited information processing capability of the agents calls for a sophisticated \textit{privacy-preserving decentralization} scheme that ensures efficient computation. Towards this end, we study the \textit{privacy-aware server-to-multi-agent assignment} problem subject to the information processing constraints associated with each agent, maintaining privacy while ensuring that the messages received by the agents about a global terminal remain informative, through a distributed private federated learning (DPFL) approach. To find a decentralized scheme for a two-agent system, we formulate an optimization problem that balances privacy and accuracy, taking into account the compression quality constraints associated with each agent. We propose a convergent iterative algorithm that alternates over self-consistent equations. We also numerically evaluate the proposed solution to exhibit the privacy-prediction trade-off and demonstrate the efficacy of the novel approach in ensuring privacy in DL and FL.
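One way to make the stated privacy-accuracy trade-off concrete is via an information-theoretic program; the following formulation is an illustrative sketch in our own notation (the symbols $X_k$, $U_k$, $Y$, $S$, $R_k$, and $\beta$ are assumptions for exposition, not taken from the paper):
\[
\min_{\{p(u_k \mid x_k)\}_{k=1,2}} \; I(S; U_1, U_2) \;-\; \beta\, I(Y; U_1, U_2)
\quad \text{s.t.} \quad I(X_k; U_k) \le R_k, \; k \in \{1, 2\},
\]
where $X_k$ is agent $k$'s observation, $U_k$ the compressed message it forms, $Y$ the inference target at the global terminal, $S$ the private attribute to be protected, $R_k$ agent $k$'s information processing (compression) budget, and $\beta \ge 0$ the multiplier balancing accuracy against privacy leakage.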
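For intuition on what "alternating over self-consistent equations" can look like, the snippet below sketches a Blahut-Arimoto-style fixed-point iteration for a single-agent, information-bottleneck-type analogue of such a scheme. It is a minimal sketch under our own assumptions: the function name, the objective, and the trade-off parameter \texttt{beta} are hypothetical and do not reproduce the paper's algorithm.
\begin{verbatim}
import numpy as np

def alternating_self_consistent(p_xy, n_u, beta, n_iter=300, seed=0):
    """Blahut-Arimoto-style alternating iteration for an information-
    bottleneck-type objective: a single-agent analogue of a
    self-consistent update scheme (illustrative sketch only).

    p_xy : joint pmf over (x, y), shape (nx, ny), entries sum to 1
    n_u  : cardinality of the compressed message U
    beta : relevance-compression trade-off multiplier (assumed)
    """
    rng = np.random.default_rng(seed)
    nx, ny = p_xy.shape
    p_x = p_xy.sum(axis=1)                        # marginal p(x)
    p_y_x = p_xy / p_x[:, None]                   # conditional p(y|x)

    # Random stochastic encoder p(u|x); each row is a pmf over u.
    q_u_x = rng.random((nx, n_u))
    q_u_x /= q_u_x.sum(axis=1, keepdims=True)

    eps = 1e-32
    for _ in range(n_iter):
        q_u = q_u_x.T @ p_x                       # marginal p(u)
        # Decoder p(y|u) induced by the encoder via Bayes' rule.
        q_y_u = (q_u_x * p_x[:, None]).T @ p_y_x  # joint p(u, y)
        q_y_u /= q_y_u.sum(axis=1, keepdims=True)
        # Self-consistent encoder update:
        #   p(u|x) is proportional to p(u) exp(-beta * KL(p(y|x) || p(y|u)))
        kl = np.sum(
            p_y_x[:, None, :]
            * np.log((p_y_x[:, None, :] + eps) / (q_y_u[None, :, :] + eps)),
            axis=2,
        )
        q_u_x = q_u[None, :] * np.exp(-beta * kl)
        q_u_x /= q_u_x.sum(axis=1, keepdims=True)
    return q_u_x

# Toy usage: a random 8x4 joint distribution, 3-symbol messages.
p_xy = np.random.default_rng(1).random((8, 4))
p_xy /= p_xy.sum()
encoder = alternating_self_consistent(p_xy, n_u=3, beta=5.0)
print(encoder.round(3))
\end{verbatim}
Each pass alternates between two coupled quantities, the encoder $p(u \mid x)$ and the induced decoder $p(y \mid u)$, holding one fixed while updating the other; iterating this pair of mutually consistent equations until they stabilize is the sense in which such algorithms converge to a fixed point.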