Due to the velocity and heterogeneity of network traffic, detecting anomalous events is challenging. The computational load placed on global servers is a significant challenge in terms of efficiency, accuracy, and scalability. Our primary motivation is to introduce a robust, scalable framework that enables efficient network anomaly detection. We address the scalability and efficiency issues of network anomaly detection by leveraging federated learning, in which multiple participants jointly train a global model. Unlike centralized training architectures, federated learning does not require participants to upload their training data to the server, preventing attackers from exploiting it. Moreover, most prior work has focused on traditional centralized machine learning, leaving federated learning under-explored for network anomaly detection. We therefore propose a deep neural network framework that can run on low- to mid-end devices, detecting network anomalies by checking whether a request from a given IP address is malicious. Compared with several traditional centralized machine learning models, the federated deep neural model reduces training-time overhead. In our experiments, the proposed method outperforms baseline machine learning techniques on the UNSW-NB15 dataset, achieving an accuracy of 97.21% with faster computation.
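The joint-training scheme described above can be illustrated with a minimal federated averaging (FedAvg) sketch: each participant refines the current global weights on its own private shard, and the server aggregates only the returned weights, never the raw data. This is a generic illustration under assumed names and a toy linear model, not the paper's actual architecture.

```python
import numpy as np

def local_update(global_weights, data, labels, lr=0.5, epochs=20):
    """A participant refines the global weights on its private data
    (illustrative: one linear layer trained with gradient descent)."""
    w = global_weights.copy()
    for _ in range(epochs):
        preds = data @ w
        grad = data.T @ (preds - labels) / len(data)
        w -= lr * grad
    return w

def fed_avg(client_weights, client_sizes):
    """Server aggregates client models weighted by local dataset size;
    only model weights, never training data, reach the server."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Simulate 3 participants fitting y = 2x on disjoint private shards.
rng = np.random.default_rng(0)
global_w = np.zeros(1)
shards = [rng.uniform(-1, 1, (20, 1)) for _ in range(3)]
for _ in range(10):  # communication rounds
    updates, sizes = [], []
    for X in shards:
        y = 2.0 * X[:, 0]
        updates.append(local_update(global_w, X, y))
        sizes.append(len(X))
    global_w = fed_avg(updates, sizes)
# global_w converges toward the true coefficient 2.0
```

In a deployment, `local_update` would be replaced by local training of the deep neural anomaly detector on each device's traffic, with the server performing the same size-weighted aggregation.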