Advances in Machine Learning (ML) and its wide range of applications have boosted its popularity. Recent privacy-awareness initiatives, such as the EU General Data Protection Regulation (GDPR, European Parliament and Council Regulation No 2016/679), have subjected ML to privacy and security assessments. Federated Learning (FL) offers a privacy-driven, decentralized training scheme that improves the security of ML models. However, the industry's fast-growing adoption of FL and the accompanying security evaluations have exposed various vulnerabilities. Depending on the FL phase, i.e., training or inference, adversarial actors with varying capabilities can mount attacks that threaten FL's confidentiality, integrity, or availability (CIA). As countermeasures, researchers apply knowledge from distinct domains, such as cryptography and statistics. This work assesses the CIA of FL by reviewing the state of the art (SoTA) to create a threat model that covers the attack surface, adversarial actors, their capabilities, and their goals. Applying this model, we propose the first unifying taxonomy of attacks and defenses. Additionally, we provide critical insights extracted by applying the suggested novel taxonomies to the SoTA, yielding promising directions for future research.