6G is the next generation of communication systems. In recent years, machine learning algorithms have been applied widely in fields such as healthcare, transportation, and autonomous vehicles, and predictive algorithms are expected to be used in 6G problems as well. With the rapid development of deep learning techniques, it is critical to take security concerns into account when applying these algorithms. While machine learning offers significant advantages for 6G, the security of AI models is often ignored; because these models have many real-world applications, their security is a vital concern. This paper proposes a mitigation method, based on adversarial learning, against adversarial attacks on 6G machine learning models for millimeter-wave (mmWave) beam prediction. The main idea behind adversarial attacks on machine learning models is to make a trained deep learning model produce faulty results by manipulating its inputs, here in the mmWave beam prediction use case for 6G applications. We also evaluate the performance of the adversarial learning mitigation method for 6G security in the mmWave beam prediction application under the fast gradient sign method (FGSM) attack. The mean squared errors of the defended and undefended models are very close.
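The FGSM attack mentioned above can be sketched on a toy linear model. This is a hedged illustration only, not the paper's actual deep beam-prediction model: the weight vector, feature count, and function names below are invented for the example, and the gradient is derived analytically for a scalar MSE loss rather than by backpropagation.

```python
import numpy as np

# Minimal FGSM sketch on a toy linear "beam predictor".
# All weights and dimensions are illustrative, not from the paper.

def fgsm_perturb(x, y, w, eps):
    """Fast Gradient Sign Method for a linear model with MSE loss.

    Loss L(x) = (w . x - y)^2, so grad_x L = 2 * (w . x - y) * w.
    FGSM shifts each input feature by eps in the sign of that gradient,
    the direction that locally increases the loss the most.
    """
    residual = np.dot(w, x) - y
    grad_x = 2.0 * residual * w
    return x + eps * np.sign(grad_x)

def mse(x, y, w):
    """Squared error of the linear predictor on one sample."""
    return float((np.dot(w, x) - y) ** 2)

# Hypothetical frozen model: 4 input features -> scalar beam quality.
w = np.array([0.5, -1.2, 0.3, 0.8])
x = np.array([1.0, 0.2, -0.5, 0.7])   # clean input sample
y = np.dot(w, x) + 0.1                # target (model slightly off)

x_adv = fgsm_perturb(x, y, w, eps=0.1)
print(mse(x, y, w), mse(x_adv, y, w))  # adversarial loss is larger
```

Adversarial training, the mitigation evaluated in the paper, would then mix such perturbed samples back into the training set so the model learns to tolerate them.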