Deep Reinforcement Learning (DRL) algorithms have been increasingly employed over the last decade to solve various decision-making problems such as autonomous driving and robotics. However, these algorithms face significant challenges when deployed in safety-critical environments, since they often exhibit erroneous behaviors that can lead to critical failures. One way to assess the safety of DRL agents is to test them in order to detect faults that lead to critical failures during execution. This raises the question of how to efficiently test DRL policies to ensure their correctness and their adherence to safety requirements. Most existing work on testing DRL agents relies on adversarial attacks that perturb the states or actions of the agent. However, such attacks often produce unrealistic environment states, and their main goal is to assess the robustness of DRL agents rather than the compliance of their policies with requirements. Given the huge state space of DRL environments, the high cost of test execution, and the black-box nature of DRL algorithms, exhaustive testing of DRL agents is infeasible. In this paper, we propose STARLA, a Search-based Testing Approach for Reinforcement Learning Agents, which tests the policy of a DRL agent by effectively searching for failing executions of the agent within a limited testing budget. We rely on machine learning models and a dedicated genetic algorithm to narrow the search towards faulty episodes. We apply STARLA to Deep-Q-Learning agents, which are widely used as benchmarks, and show that it significantly outperforms Random Testing by detecting more faults related to the agent's policy. We also investigate how to extract, from our search results, rules that characterize the faulty episodes of a DRL agent. Such rules can be used to understand the conditions under which the agent fails and thus to assess its deployment risks.
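To make the general idea concrete, the sketch below shows what a search-based test of a DRL agent can look like: a genetic algorithm evolves candidate episodes, a surrogate scoring function prioritizes the candidates most likely to be faulty, and only the top-ranked candidates are executed against the agent under test, so the search stays within a limited budget. This is a minimal illustration of the approach, not the STARLA implementation; the environment, the stand-in policy, the surrogate heuristic, and all helper names are assumptions made for this example.

```python
"""Hedged sketch: genetic search for failing episodes of a DRL agent.
An episode candidate is a short prefix of forced actions; afterwards,
the agent's own policy takes over. All names are hypothetical."""
import random
import gymnasium as gym  # assumed environment API (Gymnasium)

ENV_ID = "CartPole-v1"   # hypothetical benchmark environment
PREFIX_LEN = 12          # number of forced actions at episode start
POP_SIZE = 30
GENERATIONS = 20
MAX_STEPS = 500

def agent_policy(obs):
    # Stand-in for the trained agent under test; in practice this
    # would be the greedy action of a learned Q-network.
    return 0 if obs[2] < 0 else 1  # push toward the pole's lean

def run_episode(env, prefix, seed=0):
    """Force the prefix actions, then let the agent act.
    Returns True if the episode fails (unsafe early termination)."""
    obs, _ = env.reset(seed=seed)
    for t in range(MAX_STEPS):
        action = prefix[t] if t < len(prefix) else agent_policy(obs)
        obs, _, terminated, truncated, _ = env.step(action)
        if terminated:
            return True    # fault: reached an unsafe terminal state
        if truncated:
            return False   # survived the full horizon
    return False

def surrogate_score(prefix):
    # Placeholder for an ML model predicting fault-proneness;
    # here, a trivial heuristic favoring one-sided action bursts.
    return abs(sum(prefix) - len(prefix) / 2)

def crossover(a, b):
    cut = random.randrange(1, PREFIX_LEN)
    return a[:cut] + b[cut:]

def mutate(prefix, rate=0.1):
    return [1 - g if random.random() < rate else g for g in prefix]

def search_for_failures():
    env = gym.make(ENV_ID)
    pop = [[random.randrange(env.action_space.n) for _ in range(PREFIX_LEN)]
           for _ in range(POP_SIZE)]
    failures = []
    for _ in range(GENERATIONS):
        # Execute only the candidates ranked most fault-prone by the
        # surrogate, to stay within the limited testing budget.
        pop.sort(key=surrogate_score, reverse=True)
        failures += [p for p in pop[:5] if run_episode(env, p)]
        # Breed the next generation from the fittest half.
        parents = pop[:POP_SIZE // 2]
        pop = parents + [mutate(crossover(*random.sample(parents, 2)))
                         for _ in range(POP_SIZE - len(parents))]
    env.close()
    return failures

print(f"failing episodes found: {len(search_for_failures())}")
```

The failing episodes collected this way are also the raw material for the rule-extraction step mentioned above: their states can be abstracted and mined for interpretable conditions under which the agent fails.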