Unmanned aerial vehicles (UAVs) are often used to navigate dangerous terrain; however, they are difficult to pilot. Due to complex input-output mappings, limited perception, complex system dynamics, and the need to maintain a safe operating distance, novice pilots experience difficulty performing safe landings in obstacle-filled environments. In this work we propose a shared autonomy approach that assists novice pilots in performing safe landings on one of several elevated platforms, at a proficiency equal to or greater than that of experienced pilots. Our approach consists of two modules: a perceptual module and a policy module. The perceptual module compresses high-dimensional RGB-D images into a latent vector trained with a cross-modal variational autoencoder. The policy module provides assistive control inputs and is trained with the reinforcement learning algorithm TD3. We conduct a user study (n=33) in which participants land a simulated drone with and without the use of the assistant. Despite the goal platform not being known to the assistant, participants of all skill levels, when assisted, were able to outperform experienced participants at the task.
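The two-module pipeline described above can be illustrated at inference time with a minimal sketch: an encoder compresses an RGB-D frame into a latent vector, and a TD3-style deterministic actor maps that latent state together with the pilot's command to an assistive command. All layer sizes, the latent dimension, the action dimension, and the 50/50 command blend below are illustrative assumptions, not the paper's exact architecture.

```python
# Minimal inference-time sketch of the perception + policy pipeline.
# Layer sizes, LATENT_DIM, ACTION_DIM, and the blending rule are assumptions.
import torch
import torch.nn as nn

LATENT_DIM = 32      # assumed size of the cross-modal VAE latent vector
ACTION_DIM = 4       # assumed drone command: vx, vy, vz, yaw rate


class PerceptionEncoder(nn.Module):
    """Compresses an RGB-D frame (4 channels) into a latent vector;
    stands in for the encoder half of the cross-modal VAE."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        self.fc = nn.LazyLinear(LATENT_DIM)  # infers the flattened size

    def forward(self, rgbd):
        return self.fc(self.conv(rgbd))


class AssistivePolicy(nn.Module):
    """TD3-style deterministic actor: maps the latent state plus the
    pilot's command to an assistive control command."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM + ACTION_DIM, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, ACTION_DIM), nn.Tanh(),
        )

    def forward(self, latent, pilot_cmd):
        return self.net(torch.cat([latent, pilot_cmd], dim=-1))


if __name__ == "__main__":
    encoder, policy = PerceptionEncoder(), AssistivePolicy()
    rgbd = torch.randn(1, 4, 96, 96)        # dummy RGB-D frame
    pilot_cmd = torch.randn(1, ACTION_DIM)  # dummy pilot input
    with torch.no_grad():
        z = encoder(rgbd)
        assist_cmd = policy(z, pilot_cmd)
    # A simple shared-autonomy blend of pilot and assistant (illustrative only).
    executed_cmd = 0.5 * pilot_cmd + 0.5 * assist_cmd
    print(executed_cmd.shape)  # torch.Size([1, 4])
```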