Artificial Intelligence (AI) is one of the disruptive technologies shaping the future. It has growing applications in data-driven decision-making across major smart city solutions, including transportation, education, healthcare, public governance, and power systems. At the same time, it is gaining popularity in protecting critical cyber infrastructure from threats, attacks, damage, and unauthorized access. However, one significant issue with traditional AI technologies (e.g., deep learning) is that their rapid growth in complexity and sophistication has turned them into uninterpretable black boxes. On many occasions, it is very challenging to understand a model's decisions and biases, and therefore to control and trust a system's unexpected or seemingly unpredictable outputs. It is acknowledged that this loss of interpretability in decision-making has become a critical issue for many data-driven automated applications. But how does it affect a system's security and trustworthiness? This chapter conducts a comprehensive study of machine learning applications in cybersecurity to address this question and to demonstrate the need for explainability. It first discusses the black-box problems of AI technologies in cybersecurity applications for smart city solutions. It then considers the emerging technological paradigm of Explainable Artificial Intelligence (XAI) and discusses the transition from black-box to white-box models, along with the requirements this transition places on the interpretability, transparency, understandability, and explainability of AI-based technologies deployed in autonomous systems in smart cities. Finally, it presents some commercial XAI platforms that offer explainability over traditional AI technologies, before outlining future challenges and opportunities.
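To make the black-box-to-white-box idea concrete, the following minimal sketch (not taken from the chapter) applies one generic post-hoc explanation technique, permutation importance from scikit-learn, to a hypothetical black-box intrusion-detection classifier. The feature names and the synthetic data are illustrative assumptions, not drawn from any real dataset or from the chapter itself.

```python
# Minimal sketch: post-hoc explainability for a black-box intrusion detector.
# All feature names and data here are hypothetical placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical network-flow features; label 1 = malicious, 0 = benign.
feature_names = ["duration", "bytes_out", "bytes_in", "pkt_count", "failed_logins"]
X = rng.normal(size=(2000, 5))
# Synthetic ground truth: attacks correlate with bytes_out and failed_logins.
y = ((1.5 * X[:, 1] + 2.0 * X[:, 4] + rng.normal(scale=0.5, size=2000)) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The "black box": an ensemble whose individual decisions are hard to read.
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Post-hoc, model-agnostic explanation: shuffle each feature and measure how
# much held-out accuracy drops; large drops mark features the model relies on.
result = permutation_importance(model, X_test, y_test, n_repeats=20, random_state=0)
for name, mean, std in sorted(
    zip(feature_names, result.importances_mean, result.importances_std),
    key=lambda t: -t[1],
):
    print(f"{name:>14}: {mean:.3f} +/- {std:.3f}")
```

Feature rankings of this kind give a security analyst a first, coarse window into why a flow was flagged, which is the sort of transparency the chapter argues traditional black-box models lack.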