Machine learning (ML) is expected to play a major role in 5G edge computing. Various studies have demonstrated that ML is highly suitable for optimizing edge computing systems, where rapid mobility- and application-induced changes occur. For ML to provide the best solutions, the models must be trained continually to incorporate changing scenarios. The sudden shifts in data distributions caused by such scenarios (e.g., 5G base station failures) are referred to as concept drift and pose a major challenge to continual learning. ML models can exhibit high error rates while drifts take place, and the errors decrease only after the models learn the new distributions. This problem is more pronounced in a distributed setting, where multiple ML models are trained on heterogeneous datasets and the final model needs to capture all concept drifts. In this paper, we show that using Attention in Federated Learning (FL) is an effective way of handling concept drift. We use a 5G network traffic dataset to simulate concept drift and test various scenarios. The results indicate that Attention can significantly improve the concept drift handling capability of FL.
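To make the idea of attention-based aggregation in FL concrete, the following is a minimal sketch of one common formulation: the server weights each client update by a softmax over its (negated) distance to the current global model, so clients whose updates diverge sharply, for example due to a local concept drift, contribute less to the aggregate. This is an illustrative assumption about the mechanism, not the exact algorithm of the paper; the function names `attention_weights` and `attentive_aggregate` are hypothetical.

```python
import numpy as np

def attention_weights(client_updates, server_params):
    """Softmax attention over clients, based on distance to the global model.

    Clients whose updated parameters lie closer to the current server
    parameters receive higher attention; outlier updates are down-weighted.
    """
    dists = np.array([np.linalg.norm(u - server_params) for u in client_updates])
    scores = -dists                       # smaller distance -> larger score
    exp = np.exp(scores - scores.max())   # numerically stable softmax
    return exp / exp.sum()

def attentive_aggregate(client_updates, server_params, step=1.0):
    """Move the global model toward the attention-weighted mean of client updates."""
    w = attention_weights(client_updates, server_params)
    target = sum(wi * ui for wi, ui in zip(w, client_updates))
    return server_params + step * (target - server_params)
```

With `step=1.0` this reduces to an attention-weighted average; a smaller `step` damps the influence of any single round, which is one way to smooth the error spike while a drift is being learned.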