With the advent of 5G commercialization, the need for more reliable, faster, and intelligent telecommunication systems is envisaged for next-generation, beyond-5G (B5G) radio access technologies. Artificial Intelligence (AI) and Machine Learning (ML) are not only immensely popular in service-layer applications but have also been proposed as essential enablers in many aspects of B5G networks, from IoT devices and edge computing to cloud-based infrastructures. However, most existing surveys on B5G security focus on the performance and accuracy of AI/ML models, while often overlooking the accountability and trustworthiness of the models' decisions. Explainable AI (XAI) methods are promising techniques that allow system developers to inspect the internal workings of black-box AI/ML models. The goal of using XAI in the B5G security domain is to make the decision-making processes of security systems transparent and comprehensible to stakeholders, thereby holding those systems accountable for automated actions. This survey emphasizes the role of XAI in every facet of the forthcoming B5G era, including B5G technologies such as the RAN, zero-touch network management, and E2E slicing, as well as the use cases that end users will ultimately enjoy. Furthermore, we present lessons learned from recent efforts and outline future research directions building on currently ongoing projects involving XAI.