Automated landing for unmanned aerial vehicles (UAVs), such as multirotor drones, requires intricate software encompassing control algorithms, obstacle avoidance, and machine vision, especially when landing markers are used for guidance. Failed landings can incur significant costs from damaged drones or payloads and from the time spent seeking alternative landing solutions. It is therefore important to thoroughly test auto-landing systems in simulation before deploying them in the real world. This paper proposes \tool, a reinforcement learning (RL) augmented search-based testing framework that constructs diverse and realistic marker-based landing cases involving safety violations. Specifically, \tool\ uses a genetic algorithm (GA) to conservatively search for diverse static environment configurations offline, and RL to aggressively manipulate the trajectories of dynamic objects online, exposing potential vulnerabilities in the target deployment environment. Quantitative results show that our method generates up to 22.19\% more violation cases and nearly doubles the diversity of generated violation cases compared to baseline methods. Qualitatively, our method discovers corner cases that are missed by state-of-the-art algorithms. We further demonstrate that selected types of these corner cases can be confirmed through real-world field tests with drones.