We consider the problem of predicting cellular network performance (signal maps) from measurements collected by several mobile devices. We formulate the problem within the online federated learning framework: (i) federated learning (FL) enables users to collaboratively train a model while keeping their training data on their devices; (ii) measurements are collected as users move around over time and are used for local training in an online fashion. We consider an honest-but-curious server that observes the updates from target users participating in FL and infers their location using a deep leakage from gradients (DLG) type of attack, originally developed to reconstruct training data of DNN image classifiers. We make the key observation that a DLG attack, applied to our setting, infers the average location of a batch of local data, and can thus be used to reconstruct the target users' trajectories at a coarse granularity. We build on this observation to protect location privacy in our setting by revisiting and designing mechanisms within the federated learning framework, including: tuning the FL parameters for averaging, curating local batches so as to mislead the DLG attacker, and aggregating across multiple users with different trajectories. We evaluate the performance of our algorithms through both analysis and simulation based on real-world mobile datasets, and we show that they achieve a good privacy-utility tradeoff.
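The key observation above, that gradient leakage reveals the *average* location of a local batch, can be illustrated with a minimal sketch. The model, data, and recovery rule below are hypothetical illustrations, not the paper's actual setup: we use a toy linear signal-map model (signal strength as a linear function of location) with MSE loss, where the classic first-layer identity recovers a single input exactly as the ratio of the weight gradient to the bias gradient, and recovers a residual-weighted average of the batch locations when the batch has several points.

```python
import numpy as np

# Toy "signal map" model: predict signal strength s_hat = W @ x + b from a
# location x = (lat, lon), trained with mean squared error. This is a
# hypothetical stand-in for the local model in the abstract, used only to
# illustrate the DLG-style observation: the server-visible gradient averages
# over the local batch, so inversion recovers a (roughly) average location.

rng = np.random.default_rng(0)

def batch_gradient(W, b, X, y):
    """MSE gradient over a batch of locations X (n, 2) and labels y (n,)."""
    e = X @ W + b - y                      # residuals, shape (n,)
    gW = (2.0 / len(y)) * (e @ X)          # grad w.r.t. W, shape (2,)
    gb = (2.0 / len(y)) * e.sum()          # grad w.r.t. b, scalar
    return gW, gb

W = rng.normal(size=2)
b = 0.0

# Batch of size 1: the location is recovered exactly as gW / gb,
# since gW = 2*e*x and gb = 2*e cancel the residual e.
x1 = np.array([37.42, -122.08])
gW1, gb1 = batch_gradient(W, b, x1[None, :], np.array([-85.0]))
print(gW1 / gb1)                           # exactly x1

# Batch of several nearby measurements: gW / gb is a residual-weighted
# average of the locations -- a coarse estimate of the user's position.
X = rng.normal(loc=[37.4, -122.1], scale=0.01, size=(8, 2))
y = rng.normal(-85.0, 2.0, size=8)
gW, gb = batch_gradient(W, b, X, y)
print(gW / gb)                             # close to X.mean(axis=0)
```

This is also why the mitigations listed in the abstract make sense at a high level: averaging over more local steps or more points, curating which measurements enter a batch, and aggregating several users' updates all change what this batch average reveals about any one trajectory.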