Federated optimization, in which agents in a network collaborate with a central server to minimize a social cost without exchanging information among themselves, has attracted significant interest from the research community. In this setting, agents demand resources based on their local computations. Because optimization parameters such as states, constraints, or objective functions are exchanged with the central server, an adversary may infer sensitive information about the agents. We develop LDP-AIMD, a locally differentially private additive-increase multiplicative-decrease (AIMD) algorithm that allocates multiple divisible shared resources to agents in a network. LDP-AIMD provides a differential privacy guarantee to each agent; no inter-agent communication is required, and the central server tracks only the aggregate consumption of resources. We present experimental results demonstrating the efficacy of the algorithm, along with empirical analyses of the trade-off between privacy and efficiency.
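To illustrate the flavor of such an algorithm, the following is a minimal sketch of AIMD allocation for a single divisible resource, where each agent perturbs its reported consumption with Laplace noise before the server aggregates it. All parameter names and values here (step sizes, capacity, noise scale) are our own assumptions for illustration, not the paper's actual LDP-AIMD specification.

```python
import math
import random

def laplace_noise(scale):
    # Inverse-CDF sample from Laplace(0, scale); scale = sensitivity / epsilon.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def ldp_aimd(n_agents=5, capacity=10.0, alpha=0.1, beta=0.5,
             epsilon=1.0, sensitivity=0.2, rounds=500, seed=0):
    """Hypothetical sketch: AIMD over one shared resource with locally
    differentially private consumption reports (parameters are assumed)."""
    random.seed(seed)
    scale = sensitivity / epsilon
    alloc = [0.0] * n_agents
    for _ in range(rounds):
        # Each agent adds noise to its own report before sending (local DP).
        reported = [a + laplace_noise(scale) for a in alloc]
        congested = sum(reported) > capacity  # server sees only the aggregate
        for i in range(n_agents):
            if congested:
                alloc[i] *= beta   # multiplicative decrease on congestion
            else:
                alloc[i] += alpha  # additive increase otherwise
    return alloc

allocs = ldp_aimd()
print(sum(allocs))  # aggregate consumption oscillates near the capacity
```

The sketch captures the two ingredients the abstract describes: the server never sees individual true demands, only noisy reports, and agents adjust via the classic AIMD saw-tooth without communicating with one another.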