Trustworthy artificial intelligence (AI) technology has revolutionized daily life and greatly benefited human society. Among various AI technologies, Federated Learning (FL) stands out as a promising solution for diverse real-world scenarios, ranging from risk evaluation systems in finance to cutting-edge technologies such as drug discovery in the life sciences. However, challenges around data isolation and privacy threaten the trustworthiness of FL systems. Adversarial attacks against data privacy, the stability of learning algorithms, and system confidentiality are particularly concerning in the context of distributed training in federated learning. It is therefore crucial to develop FL in a trustworthy manner, with a focus on security, robustness, and privacy. In this survey, we propose a comprehensive roadmap for developing trustworthy FL systems and summarize existing efforts along these three key aspects: security, robustness, and privacy. We outline the threats that introduce vulnerabilities into trustworthy federated learning at each stage of development, including data processing, model training, and deployment. To guide the selection of the most appropriate defense methods, we discuss specific technical solutions for realizing each aspect of Trustworthy FL (TFL). Our approach differs from previous work, which primarily discusses TFL from a legal perspective or presents FL from a high-level, non-technical viewpoint.