The application of Federated Learning (FL) is steadily increasing, especially in privacy-aware domains such as healthcare. However, its adoption has been limited by security concerns arising from various adversarial attacks, such as poisoning attacks (model and data poisoning). Such attacks corrupt the local models or training data in order to manipulate the global model for undue benefit or malicious use. Traditional data-auditing methods for mitigating poisoning attacks have limited applicability in FL because the edge devices never share their raw data directly due to privacy concerns, and they are globally distributed with no external insight into their training data. Consequently, it is challenging to develop appropriate strategies to address such attacks and minimize their impact on the global model in federated learning. To address these challenges, we propose a novel framework that detects poisoning attacks as anomalies using deep neural networks and support vector machines, without requiring any direct access to, or information about, the underlying training data of the local edge devices. We illustrate and evaluate the proposed framework using several state-of-the-art poisoning attacks on two healthcare applications: electrocardiogram (ECG) classification and human activity recognition. Our experimental analysis shows that the proposed method can efficiently detect poisoning attacks and remove the identified poisoned updates from the global aggregation, thereby improving the performance of the federated global model.
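To make the server-side filtering idea concrete, the following is a minimal sketch, not the paper's exact method: it assumes client updates arrive at the server as flattened weight vectors, and it substitutes a one-class SVM as an illustrative anomaly detector that flags poisoned updates before aggregation. All function names and parameters here are hypothetical.

```python
# Illustrative sketch: anomaly-based filtering of client updates before
# federated averaging. The one-class SVM is a stand-in detector; the paper's
# framework combines deep neural networks and support vector machines.
import numpy as np
from sklearn.svm import OneClassSVM


def filter_poisoned_updates(client_updates, nu=0.1):
    """Flag anomalous client updates and keep only the inliers.

    client_updates: array of shape (n_clients, n_params), one flattened
    model update per client. Note that no raw training data is needed.
    """
    X = np.asarray(client_updates)
    detector = OneClassSVM(kernel="rbf", gamma="scale", nu=nu)
    labels = detector.fit_predict(X)   # +1 = inlier, -1 = anomaly
    return X[labels == 1]              # drop updates flagged as poisoned


def federated_average(benign_updates):
    """Aggregate the surviving updates with plain FedAvg-style averaging."""
    return benign_updates.mean(axis=0)


# Toy usage: nine benign clients plus one scaled update as a crude
# model-poisoning stand-in.
rng = np.random.default_rng(0)
updates = rng.normal(0.0, 0.01, size=(10, 128))
updates[-1] *= 50.0                    # hypothetical poisoned client
global_delta = federated_average(filter_poisoned_updates(updates))
```

The key design point this sketch captures is that detection operates only on the submitted model updates, which is what lets the framework work without access to the edge devices' underlying training data.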