We investigate the problem of private read update write (PRUW) in federated submodel learning (FSL) with sparsification. In FSL, a machine learning model is divided into multiple submodels, where each user updates only the submodel that is relevant to the user's local data. PRUW is the process of privately performing FSL by reading from and writing to the required submodel without revealing the submodel index or the values of the updates to the databases. Sparsification is a widely used concept in learning, where the users update only a small fraction of parameters to reduce the communication cost. Revealing the coordinates of these selected (sparse) updates leaks the privacy of the user. We show how PRUW in FSL can be performed with sparsification. We propose a novel scheme which privately reads from and writes to arbitrary parameters of any given submodel, without revealing the submodel index, the values of the updates, or the coordinates of the sparse updates, to the databases. The proposed scheme achieves significantly lower reading and writing costs compared to what is achieved without sparsification.
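To make the sparsification idea concrete, the following is a minimal sketch of top-$k$ (fraction-based) update selection, where a user keeps only the largest-magnitude entries of its local update. The function name `sparsify_topk` and the numeric values are illustrative assumptions; this shows only the plain sparsification step, not the paper's private scheme, which would additionally hide the selected coordinates from the databases.

```python
import numpy as np

def sparsify_topk(update, r):
    """Keep the fraction r of largest-magnitude entries of a local update.

    Returns the coordinates of the kept entries and their values.
    Illustrative only: in PRUW with sparsification, these coordinates
    themselves must be kept private from the databases.
    """
    k = max(1, int(r * update.size))
    # Indices of the k entries with the largest absolute value.
    coords = np.argsort(np.abs(update))[-k:]
    return coords, update[coords]

# Example: keep the top 40% of a 5-dimensional update (2 entries).
grad = np.array([0.05, -2.0, 0.3, 1.5, -0.01])
coords, vals = sparsify_topk(grad, 0.4)
```

Only the `(coordinate, value)` pairs are communicated, reducing the upload from the full model dimension to a small fraction of it; the privacy question studied here is how to do this read/write without revealing which coordinates were selected.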