We investigate the rate distortion tradeoff in private read update write (PRUW) in relation to federated submodel learning (FSL). In FSL, a machine learning (ML) model is divided into multiple submodels based on the different types of data used for training. Each user downloads and updates only the submodel relevant to its local data. The process of downloading and updating the required submodel while guaranteeing the privacy of the submodel index and the values of the updates is known as PRUW. In this work, we study how the communication cost of PRUW can be reduced when a predetermined amount of distortion is allowed in the reading (download) and writing (upload) phases. We characterize the rate distortion tradeoff in PRUW and present a scheme that achieves the lowest communication cost under a given distortion budget.
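As a rough, hypothetical illustration of the writing-phase tradeoff (not the paper's achievable scheme), the following Python sketch drops a fraction D of update coordinates before upload: the `sparsify_update` helper is an assumed name, and the example simply shows how tolerating a larger distortion D shrinks the number of symbols a user must write back, at the cost of a higher reconstruction error.

```python
import numpy as np

def sparsify_update(update: np.ndarray, distortion: float):
    """Keep only the largest-magnitude (1 - distortion) fraction of coordinates.

    `distortion` is the fraction of coordinates dropped (zeroed out); the
    upload cost then scales with the number of surviving coordinates.
    """
    keep = int(np.ceil((1.0 - distortion) * update.size))
    idx = np.argsort(np.abs(update))[-keep:]   # indices of retained updates
    sparse = np.zeros_like(update)
    sparse[idx] = update[idx]
    return sparse, idx

rng = np.random.default_rng(0)
u = rng.normal(size=1000)                      # toy submodel update
for D in (0.0, 0.5, 0.9):
    sparse, idx = sparsify_update(u, D)
    mse = np.mean((u - sparse) ** 2)           # distortion actually incurred
    print(f"D={D:.1f}: upload {idx.size}/{u.size} symbols, MSE={mse:.3f}")
```

In an actual PRUW scheme the retained updates would additionally be noise-protected and permuted so that neither the submodel index nor the update values are revealed to the databases; the sketch above isolates only the cost-versus-distortion effect.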