Fast identification of new network attack patterns is crucial for improving network security. Nevertheless, identifying an ongoing attack in a heterogeneous network is a non-trivial task. Federated learning emerges as a solution for collaboratively training an Intrusion Detection System (IDS). A federated learning-based IDS trains a global model from local machine learning models provided by federated participants without sharing local data. However, federated learning has intrinsic optimization challenges. This paper proposes the Federated Simulated Annealing (FedSA) metaheuristic to select the hyperparameters and a subset of participants for each aggregation round of federated learning. FedSA optimizes hyperparameters linked to global model convergence, reducing the number of aggregation rounds and speeding up convergence. Thus, FedSA accelerates the extraction of knowledge from local models, requiring fewer IDS updates. The evaluation shows that the FedSA global model converges in fewer than ten communication rounds and requires up to 50% fewer aggregation rounds than the conventional aggregation approach to reach approximately 97% attack detection accuracy.
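To make the idea of simulated-annealing-driven selection concrete, the sketch below searches over a local learning rate and a participant subset across aggregation rounds. It is a minimal illustration under assumed names (`Candidate`, `neighbor`, `evaluate_round`, and the geometric cooling schedule are hypothetical), not the authors' FedSA implementation; in a real system, `evaluate_round` would run one aggregation round and measure global-model loss.

```python
# Minimal sketch: simulated annealing over federated-learning hyperparameters
# and the client subset chosen for each aggregation round. All names and the
# cost function are illustrative assumptions, not the paper's implementation.
import math
import random
from dataclasses import dataclass


@dataclass(frozen=True)
class Candidate:
    learning_rate: float          # local-training hyperparameter
    participants: frozenset       # subset of clients joining this round


def neighbor(c: Candidate, all_clients: list) -> Candidate:
    """Perturb the current solution: tweak the learning rate and swap one client."""
    lr = min(1.0, max(1e-4, c.learning_rate * random.uniform(0.8, 1.25)))
    members = set(c.participants)
    members.discard(random.choice(list(members)))
    members.add(random.choice(all_clients))
    return Candidate(lr, frozenset(members))


def evaluate_round(c: Candidate) -> float:
    """Placeholder cost (stand-in for validation loss of the aggregated model)."""
    return abs(c.learning_rate - 0.05) + 0.01 * len(c.participants)


def fed_sa_select(all_clients: list, subset_size: int,
                  rounds: int = 50, t0: float = 1.0, alpha: float = 0.9) -> Candidate:
    """Simulated annealing: accept better candidates always, worse ones with
    Boltzmann probability exp(-delta/T), while the temperature T cools geometrically."""
    current = Candidate(0.1, frozenset(random.sample(all_clients, subset_size)))
    best, best_cost = current, evaluate_round(current)
    cost, temp = best_cost, t0
    for _ in range(rounds):
        cand = neighbor(current, all_clients)
        cand_cost = evaluate_round(cand)
        if cand_cost < cost or random.random() < math.exp((cost - cand_cost) / temp):
            current, cost = cand, cand_cost
            if cost < best_cost:
                best, best_cost = current, cost
        temp *= alpha  # geometric cooling schedule
    return best


if __name__ == "__main__":
    print(fed_sa_select(all_clients=list(range(20)), subset_size=5))
```

In this reading, each annealing step corresponds to trying a candidate configuration for the next aggregation round, so a candidate that degrades the global model can still be accepted early on (high temperature) to escape poor local optima, while later rounds converge toward the best-found hyperparameters and participant subset.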