News recommendation is critical for personalized news access. Most existing news recommendation methods rely on centralized storage of users' historical news click behavior data, which may lead to privacy concerns and hazards. Federated learning is a privacy-preserving framework that enables multiple clients to collaboratively train models without sharing their private data. However, the computation and communication costs of directly learning many existing news recommendation models in a federated way are unacceptable for user clients. In this paper, we propose an efficient federated learning framework for privacy-preserving news recommendation. Instead of training and communicating the whole model, we decompose the news recommendation model into a large news model maintained on the server and a lightweight user model shared between the server and clients, where news representations and the user model are communicated between the server and clients. More specifically, the clients request the user model and news representations from the server, and send their locally computed gradients to the server for aggregation. The server updates its global user model with the aggregated gradients, and further updates its news model to infer updated news representations. Since the local gradients may contain private information, we propose a secure aggregation method to aggregate gradients in a privacy-preserving way. Experiments on two real-world datasets show that our method reduces the computation and communication cost on clients while keeping promising model performance.
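To make the communication pattern concrete, here is a minimal NumPy sketch of one federated round under this decomposition. Everything in it is illustrative rather than the paper's actual method: the linear `user_model`, the squared-error click loss, the three toy clients, and the additive pairwise-mask scheme are simplified stand-ins for the real user encoder, training objective, and secure aggregation protocol.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_news = 8, 20

# Server state: a lightweight user model (a single linear map, standing in
# for the real user encoder) and news representations inferred by the large
# server-side news model.
user_model = rng.normal(scale=0.1, size=(d, d))
news_reps = rng.normal(size=(n_news, d))

def client_gradient(clicked, cand, label, W, reps):
    """Local gradient of a squared-error click-score loss w.r.t. the user
    model. Gradients w.r.t. the news representations would be computed and
    sent analogously; they are omitted to keep the sketch short."""
    m = reps[clicked].mean(axis=0)   # aggregate clicked news into a user vector
    v = reps[cand]
    s = m @ W @ v                    # predicted click score
    return 2.0 * (s - label) * np.outer(m, v)

def pairwise_mask(i, j, shape):
    """Mask shared by clients i and j; i adds it, j subtracts it, so all
    masks cancel in the server-side sum (toy additive-mask aggregation)."""
    seed = (min(i, j) * 7919 + max(i, j)) % 2**32
    return np.random.default_rng(seed).normal(size=shape)

# One federated round over three toy clients: (clicked ids, candidate, label).
clients = [([0, 3, 5], 7, 1.0), ([1, 2], 9, 0.0), ([4, 6, 8], 7, 1.0)]
masked = []
for i, (clicked, cand, y) in enumerate(clients):
    g = client_gradient(clicked, cand, y, user_model, news_reps)
    for j in range(len(clients)):
        if j != i:
            g += pairwise_mask(i, j, g.shape) * (1.0 if i < j else -1.0)
    masked.append(g)  # the server never sees an individual unmasked gradient

agg = sum(masked) / len(clients)   # masks cancel; only the average survives
user_model -= 0.01 * agg           # server updates the global user model
# The server would then update its news model and re-infer news_reps.
```

The efficiency point the sketch reflects is that clients only ever download the news representations and the small user model, never the large news model, and upload only masked gradients of the small shared parameters.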