Reinforcement Learning (RL), one of the core paradigms in machine learning, learns to make sequential decisions through interaction with an environment. This approach has significantly advanced AI applications across various domains, notably smart grid optimization and smart home automation. However, the proliferation of RL in these critical sectors has also exposed them to sophisticated adversarial attacks that target the underlying neural network policies, compromising system integrity. Given the pivotal role of RL in enhancing the efficiency and sustainability of smart grids and in delivering personalized convenience in smart homes, ensuring the security of these systems is paramount. This paper aims to bolster the resilience of RL frameworks within these specific contexts, addressing the unique challenges posed by the intricate and potentially adversarial environments of smart grids and smart homes. We provide a thorough review of the latest adversarial RL threats and outline effective defense strategies tailored to safeguard these applications. Our comparative analysis sheds light on the nuances of adversarial tactics against RL-driven smart systems and evaluates defense mechanisms with respect to their innovative contributions, limitations, and the trade-offs they entail. By concentrating on smart grid and smart home scenarios, this survey equips ML developers and researchers with the insights needed to secure RL applications against emerging threats, ensuring their reliability and safety in our increasingly connected world.