Many problems in machine learning rely on multi-task learning (MTL), in which the goal is to solve multiple related tasks simultaneously. MTL is particularly relevant for privacy-sensitive applications in areas such as healthcare, finance, and IoT computing, where sensitive data from multiple, varied sources are shared for the purpose of learning. In this work, we formalize notions of task-level privacy for MTL via joint differential privacy (JDP), a relaxation of differential privacy used in mechanism design and distributed optimization. We then propose an algorithm for mean-regularized MTL, an objective commonly used in personalized federated learning, subject to JDP. We analyze our objective and solver, providing certifiable guarantees on both privacy and utility. Empirically, we find that our method yields improved privacy/utility trade-offs relative to global baselines across common federated learning benchmarks.
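For concreteness, a standard form of the mean-regularized MTL objective is sketched below. This is the common formulation from the personalized federated learning literature; the symbols (per-task losses $F_t$, task models $w_t$, coupling weight $\lambda$) are illustrative assumptions rather than notation taken from this abstract:
$$\min_{w_1,\dots,w_m}\; \sum_{t=1}^{m} F_t(w_t) \;+\; \frac{\lambda}{2} \sum_{t=1}^{m} \big\lVert w_t - \bar{w} \big\rVert_2^2, \qquad \bar{w} = \frac{1}{m} \sum_{t=1}^{m} w_t.$$
Here each of the $m$ tasks keeps its own model $w_t$, and the regularizer pulls every model toward the mean $\bar{w}$: setting $\lambda = 0$ recovers fully independent per-task training, while letting $\lambda \to \infty$ forces all tasks toward a single global model.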