Federated recommendation can mitigate the systematic privacy risks of traditional recommendation, since it allows model training and online inference without centralized collection of user data. Most existing works assume that all user devices are available and capable of participating in federated learning. In practice, however, the complex recommendation models designed for accurate prediction and the massive item data impose high computation and communication costs on resource-constrained user devices, resulting in poor performance or training failure. Therefore, how to effectively reduce the computation and communication overhead to achieve efficient federated recommendation across ubiquitous mobile devices remains a significant challenge. This paper introduces split learning into two-tower recommendation models and proposes STTFedRec, a privacy-preserving and efficient cross-device federated recommendation framework. STTFedRec reduces local computation by offloading the training and computation of the item model from user devices to a powerful server. The server hosting the item model provides low-dimensional item embeddings, instead of raw item data, to user devices for local training and online inference, thereby compressing server broadcasts. User devices only need to perform similarity calculations with cached user embeddings, enabling efficient online inference. We also propose an obfuscated item request strategy and a multi-party circular secret sharing chain to enhance privacy protection during model training. Experiments conducted on two public datasets demonstrate that STTFedRec reduces the average computation time and communication size of the baseline models by about 40 times and 42 times, respectively, in the best-case scenario, while maintaining comparable recommendation accuracy.
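For illustration, below is a minimal sketch (not the authors' implementation) of the split two-tower inference flow described above, written in NumPy. The dimensions, the linear item tower, and the function names `server_item_embeddings` / `device_rank` are assumptions introduced only to show how the server broadcasts low-dimensional item embeddings while the device scores them against a cached user embedding.

```python
import numpy as np

# ----- Server side: item tower (assumed here to be a simple linear projection) -----
rng = np.random.default_rng(0)
ITEM_FEAT_DIM, EMBED_DIM, NUM_ITEMS = 128, 16, 1000

item_features = rng.normal(size=(NUM_ITEMS, ITEM_FEAT_DIM))   # raw item data never leaves the server
W_item = rng.normal(size=(ITEM_FEAT_DIM, EMBED_DIM))          # item-tower parameters

def server_item_embeddings(requested_ids):
    """Server returns low-dimensional item embeddings instead of raw item data."""
    return item_features[requested_ids] @ W_item               # shape: (len(requested_ids), EMBED_DIM)

# ----- Device side: cached user embedding + local similarity scoring -----
user_embedding = rng.normal(size=(EMBED_DIM,))                 # cached output of the local user tower

def device_rank(requested_ids, top_k=10):
    """Device scores candidate items by dot-product similarity and ranks them locally."""
    item_embs = server_item_embeddings(requested_ids)          # broadcast: only EMBED_DIM floats per item
    scores = item_embs @ user_embedding
    order = np.argsort(-scores)[:top_k]
    return [requested_ids[i] for i in order]

print(device_rank(list(range(NUM_ITEMS)), top_k=5))
```

In this toy setup the broadcast per item shrinks from ITEM_FEAT_DIM to EMBED_DIM values, and the device's online work is a single matrix-vector product, which is the source of the computation and communication savings claimed in the abstract.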