We introduce Air Learning, an open-source simulator and gym environment for deep reinforcement learning research on resource-constrained aerial robots. Equipped with domain randomization, Air Learning exposes a UAV agent to a diverse set of challenging scenarios. We seed the toolset with point-to-point obstacle avoidance tasks in three different environments, along with Deep Q Network (DQN) and Proximal Policy Optimization (PPO) trainers. Air Learning assesses the policies' performance under various quality-of-flight (QoF) metrics, such as the energy consumed, endurance, and the average trajectory length, on resource-constrained embedded platforms like a Raspberry Pi. We find that the trajectories on an embedded Ras-Pi are vastly different from those predicted on a high-end desktop system, resulting in up to $40\%$ longer trajectories in one of the environments. To understand the source of such discrepancies, we use Air Learning to artificially degrade high-end desktop performance to mimic what happens on a low-end embedded system. We then propose a mitigation technique that uses hardware-in-the-loop to determine the latency distribution of running the policy on the target platform (the onboard compute of the aerial robot). A latency randomly sampled from this distribution is then added as an artificial delay within the training loop. Training the policy with these artificial delays allows us to minimize the hardware gap, reducing the discrepancy in the flight-time metric from 37.73\% to 0.5\%. Thus, Air Learning with hardware-in-the-loop characterizes these differences and exposes how the choice of onboard compute affects the aerial robot's performance. We also conduct reliability studies to assess the effect of sensor failures on the learned policies. All put together, \airl enables a broad class of deep RL research on UAVs. The source code is available at:~\texttt{\url{http://bit.ly/2JNAVb6}}.
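The latency-injection mitigation can be sketched as follows. This is a minimal illustration, assuming a gym-style training loop; the helper name \texttt{step\_with\_artificial\_delay}, the hard-coded \texttt{measured\_latencies} array, and the use of \texttt{time.sleep} are assumptions for exposition and not the released implementation, in which the latency distribution would be measured hardware-in-the-loop on the target platform.

\begin{verbatim}
import time
import numpy as np

# Hypothetical latency samples (seconds) obtained by profiling policy
# inference hardware-in-the-loop on the target platform (e.g., a Ras-Pi).
# In practice these come from measurement, not hard-coded values.
measured_latencies = np.array([0.041, 0.055, 0.048, 0.062, 0.050])

def step_with_artificial_delay(env, policy, obs):
    """One environment step that mimics onboard inference latency.

    A latency value is drawn at random from the measured distribution and
    inserted between observing the state and applying the action, so the
    policy is trained under the timing it will experience on the robot.
    """
    action = policy(obs)   # inference on the (fast) training machine
    delay = float(np.random.choice(measured_latencies))
    time.sleep(delay)      # artificial delay standing in for slow onboard compute
    return env.step(action)
\end{verbatim}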