We consider the problem of predicting cellular network performance (signal maps) from measurements collected by several mobile devices. We formulate the problem within the online federated learning framework: (i) federated learning (FL) enables users to collaboratively train a model while keeping their training data on their devices; (ii) measurements are collected as users move around over time and are used for local training in an online fashion. We consider an honest-but-curious server that observes the updates from target users participating in FL and infers their location using a deep leakage from gradients (DLG) type of attack, originally developed to reconstruct the training data of DNN image classifiers. We make the key observation that a DLG attack, applied to our setting, infers the average location of a batch of local data, and can thus be used to reconstruct the target users' trajectories at a coarse granularity. We show that a moderate level of privacy protection is already offered by the averaging of gradients, which is inherent to Federated Averaging. Furthermore, we propose an algorithm that devices can apply locally to curate the batches used for local updates, so as to effectively protect their location privacy without hurting utility. Finally, we show that the effect of multiple users participating in FL depends on the similarity of their trajectories. To the best of our knowledge, this is the first study of DLG attacks in the setting of FL from crowdsourced spatio-temporal data.
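The key observation above can be illustrated with a minimal sketch. The model, weights, and locations below are hypothetical toy choices, not the paper's actual signal-map DNN or attack: for a linear predictor of signal strength from a 2-D location, the per-batch gradients the server observes admit a closed-form DLG-style inversion that recovers the exact location when the batch has one sample, but only a residual-weighted average of the locations (roughly the batch centroid) for larger batches.

```python
import numpy as np

# Hypothetical toy model (not the paper's signal-map DNN):
# predicted signal strength = w . x + b, where x is a 2-D location.
w = np.array([0.3, 0.7])
b = 0.5

def grads(X, t):
    """Gradients of mean squared error over a batch, as seen by the server."""
    r = X @ w + b - t                        # per-sample residuals
    gw = 2 * (r[:, None] * X).mean(axis=0)   # dL/dw
    gb = 2 * r.mean()                        # dL/db
    return gw, gb

# Batch of one: the inversion x = gw / gb recovers the location exactly.
x = np.array([3.0, -1.2])
gw, gb = grads(x[None, :], np.array([0.0]))
x_rec = gw / gb
assert np.allclose(x_rec, x)

# Larger batch: the same inversion yields a residual-weighted average of
# the batch locations -- close to the centroid, i.e. a coarse estimate.
rng = np.random.default_rng(0)
X = rng.normal(size=(32, 2)) + np.array([10.0, 10.0])
gw, gb = grads(X, np.zeros(32))
x_avg = gw / gb
print(x_avg, X.mean(axis=0))  # recovered average vs. true centroid
```

This is why batching itself (and, at the server, the averaging inherent to Federated Averaging) already coarsens what a DLG-type attacker can recover: the gradient exposes an aggregate of the locations in the batch rather than individual points.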