In distributed settings, collaborations between different entities, such as financial institutions, medical centers, and retail markets, are crucial to providing improved service and performance. However, the underlying entities may have little interest in sharing their private data, proprietary models, and objective functions. These privacy requirements have created new challenges for collaboration. In this work, we propose Gradient Assisted Learning (GAL), a new method for various entities to assist each other in supervised learning tasks without sharing data, models, or objective functions. In this framework, all participants collaboratively optimize the aggregate of local loss functions, and each participant autonomously builds its own model by iteratively fitting the gradients of the objective function. Experimental studies demonstrate that Gradient Assisted Learning can achieve performance close to that of centralized learning, in which all data, models, and objective functions are fully disclosed.
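To make the iterative gradient-fitting idea concrete, the following is a minimal sketch, not the paper's implementation. It assumes vertically partitioned data, a squared-error loss held by the label owner, decision-tree local learners, and an unweighted average of the participants' predictions; the names `Participant`, `assist_rounds`, and `learning_rate` are illustrative choices rather than terms from the paper. Each round, the label owner broadcasts only the pseudo-residuals (negative gradients of its loss with respect to the current aggregate prediction), each participant fits a local model to them using its private features, and only the resulting predictions are returned.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)

# Toy vertically partitioned data: each participant holds its own feature block.
n = 500
X_a, X_b = rng.normal(size=(n, 3)), rng.normal(size=(n, 3))
y = X_a[:, 0] + 2.0 * X_b[:, 1] + 0.1 * rng.normal(size=n)  # labels stay with one entity


class Participant:
    """Holds private features and fits a local model to the broadcast pseudo-residuals."""

    def __init__(self, X):
        self.X = X          # private features, never shared
        self.models = []    # local model ensemble, never shared

    def fit_round(self, residuals):
        model = DecisionTreeRegressor(max_depth=3).fit(self.X, residuals)
        self.models.append(model)
        return model.predict(self.X)  # only predictions leave the participant


participants = [Participant(X_a), Participant(X_b)]
prediction = np.zeros(n)              # aggregate prediction kept by the label owner
learning_rate, assist_rounds = 0.5, 20

for _ in range(assist_rounds):
    # Negative gradient of the squared-error loss w.r.t. the current prediction.
    residuals = y - prediction
    # Each participant fits the residuals on its own features; data and models stay local.
    local_preds = [p.fit_round(residuals) for p in participants]
    prediction += learning_rate * np.mean(local_preds, axis=0)

print("final MSE:", np.mean((y - prediction) ** 2))
```

Under this sketch, the only quantities exchanged are the pseudo-residuals and the local predictions, which is what allows each entity to keep its data, model, and loss function private while the aggregate objective is still driven down round by round.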