We develop a privacy-preserving distributed projection least mean squares (LMS) strategy over linear multitask networks, where the agents' local parameters of interest, or tasks, are linearly related. Each agent is interested not only in improving its local inference performance via in-network cooperation with neighboring agents, but also in protecting its own task against privacy leakage. In our proposed strategy, at each time instant, each agent sends a noisy estimate, namely its local intermediate estimate corrupted by zero-mean additive noise, to its neighboring agents. We derive a sufficient condition on the amount of noise to add to each agent's intermediate estimate to achieve an optimal trade-off between the network mean-square deviation and an inference privacy constraint. We propose a distributed and adaptive strategy to compute the additive noise powers, and study the mean and mean-square behaviors and the privacy-preserving performance of the proposed strategy. Simulation results demonstrate that our strategy balances the trade-off between estimation accuracy and privacy preservation.
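The noisy-sharing step described above can be sketched as follows. This is a minimal illustration, not the paper's exact algorithm: the function names (`noisy_share`, `combine`), the Gaussian noise model, and the uniform combination weights are all assumptions made for the example; the paper's strategy additionally adapts the per-agent noise powers online.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_share(psi, noise_power):
    """Corrupt an agent's intermediate estimate with zero-mean additive
    Gaussian noise of the given power before sharing it with neighbors."""
    return psi + rng.normal(0.0, np.sqrt(noise_power), size=psi.shape)

def combine(noisy_estimates, weights):
    """Fuse the noisy estimates received from neighboring agents
    using convex combination weights (a standard diffusion-style step)."""
    return sum(w * est for w, est in zip(weights, noisy_estimates))

# Illustration: an agent fuses its own intermediate estimate with one
# neighbor's noisy estimate; both add noise before sharing.
psi_self = np.array([1.0, 2.0])
psi_neighbor = np.array([1.1, 1.9])
shared = [noisy_share(psi_self, 0.01), noisy_share(psi_neighbor, 0.01)]
w_next = combine(shared, weights=[0.5, 0.5])
```

Setting a larger `noise_power` strengthens the privacy of the shared estimate at the cost of estimation accuracy; the abstract's sufficient condition governs how this trade-off is balanced across the network.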