With the advent of 5G commercialization, the need for more reliable, faster, and intelligent telecommunication systems is envisaged for the next generation of radio access technologies beyond 5G (B5G). Artificial Intelligence (AI) and Machine Learning (ML) are immensely popular in service-layer applications and have been proposed as essential enablers in many aspects of 5G and beyond networks, from IoT devices and edge computing to cloud-based infrastructures. However, existing surveys of ML-based 5G security tend to emphasize AI/ML model performance and accuracy over the models' accountability and trustworthiness. In contrast, this paper explores the potential of Explainable AI (XAI) methods, which would allow stakeholders in 5G and beyond to inspect the intelligent black-box systems used to secure next-generation networks. The goal of using XAI in the security domain of 5G and beyond is to make the decision-making processes of ML-based security systems transparent and comprehensible to 5G and beyond stakeholders, holding these systems accountable for their automated actions. This survey highlights the role of XAI across every facet of the forthcoming B5G era, including B5G technologies such as ORAN, zero-touch network management, and end-to-end slicing, whose benefits general users will ultimately enjoy. Furthermore, we present lessons learned from recent efforts and outline future research directions, building on currently ongoing projects involving XAI.