We introduce Air Learning, an open-source simulator and gym environment for deep reinforcement learning research on resource-constrained aerial robots. Equipped with domain randomization, Air Learning exposes a UAV agent to a diverse set of challenging scenarios. We seed the toolset with point-to-point obstacle avoidance tasks in three different environments and with Deep Q Network (DQN) and Proximal Policy Optimization (PPO) trainers. Air Learning assesses the policies' performance under various quality-of-flight (QoF) metrics, such as energy consumed, endurance, and average trajectory length, on resource-constrained embedded platforms like a Raspberry Pi. We find that the trajectories on an embedded Raspberry Pi are vastly different from those predicted on a high-end desktop system, resulting in up to 40% longer trajectories in one of the environments. To understand the source of such discrepancies, we use Air Learning to artificially degrade high-end desktop performance to mimic what happens on a low-end embedded system. We then propose a mitigation technique that uses hardware-in-the-loop to determine the latency distribution of running the policy on the target platform (the onboard compute of the aerial robot). A latency randomly sampled from this distribution is then added as an artificial delay within the training loop. Training the policy with artificial delays allows us to minimize the hardware gap (the discrepancy in the flight-time metric is reduced from 37.73% to 0.5%). Thus, Air Learning with hardware-in-the-loop characterizes these differences and exposes how the choice of onboard compute affects the aerial robot's performance. We also conduct reliability studies to assess the effect of sensor failures on the learned policies. All put together, Air Learning enables a broad class of deep RL research on UAVs. The source code is available at: http://bit.ly/2JNAVb6.
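The latency-injection mitigation described above can be sketched as follows. This is a minimal illustration, not Air Learning's actual API: the function name `step_with_artificial_delay`, the `DummyEnv` interface, and the sample latency values are all hypothetical; in practice the latency distribution would be measured via hardware-in-the-loop on the target platform.

```python
import random
import time

# Hypothetical latency samples (seconds) measured by running the policy
# on the target embedded platform (e.g., a Raspberry Pi).
measured_latencies = [0.031, 0.045, 0.052, 0.038, 0.060]

def step_with_artificial_delay(env, policy, obs):
    """One environment step that mimics on-board inference latency.

    A latency value is sampled from the empirical distribution and the
    action is delayed by that amount, so the policy trains under timing
    similar to what it will experience on the embedded platform.
    """
    action = policy(obs)
    delay = random.choice(measured_latencies)  # sample from latency distribution
    time.sleep(delay)                          # artificial delay in the training loop
    return env.step(action)
```

In a full training setup, this delayed step would replace the ordinary `env.step(action)` call inside the DQN or PPO rollout loop, closing the gap between simulated and on-device behavior.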