Federated Learning is a distributed machine learning framework designed to preserve data privacy, i.e., local data remain private throughout the entire training and testing procedure. Federated Learning is gaining popularity because it allows machine learning techniques to be applied while preserving privacy. However, it inherits the vulnerabilities of deep learning techniques. For instance, due to its distributed nature and lack of access to the raw data, Federated Learning is particularly vulnerable to data poisoning attacks that may deteriorate its performance and integrity. In addition, it is extremely difficult to correctly identify malicious clients under non-Independently and/or Identically Distributed (non-IID) data. Real-world data can be complex and diverse, making them hard to distinguish from malicious data without direct access to the raw data. Prior research has focused on detecting malicious clients while treating only clients with IID data as benign. In this study, we propose a method that detects anomalous clients and separates them from benign clients even when the benign clients hold non-IID data. Our proposed method leverages feature dimension reduction, dynamic clustering, and cosine similarity-based clipping. The experimental results validate that our proposed method not only classifies the malicious clients but also alleviates their negative influence on the entire procedure. Our findings may be used in future studies to effectively eliminate anomalous clients when building a model with diverse data.
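The cosine similarity-based clipping mentioned above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the threshold `tau`, the server-side reference direction, and the damping rule are all assumptions introduced for the example.

```python
import numpy as np

def cosine_clip(client_update, reference_direction, tau=0.5):
    """Damp a client update whose direction deviates from a server-side
    reference direction (hypothetical sketch of cosine-similarity clipping)."""
    # Cosine similarity between the client's update and the reference direction
    cos = np.dot(client_update, reference_direction) / (
        np.linalg.norm(client_update) * np.linalg.norm(reference_direction) + 1e-12
    )
    # Updates that point away from the reference (similarity below tau) are
    # scaled down; well-aligned updates pass through unchanged.
    if cos < tau:
        return client_update * max(cos, 0.0)
    return client_update
```

An aligned update is kept as-is, while an opposing one (e.g., a poisoned gradient pointing in the reverse direction) is scaled toward zero before aggregation.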