Solving sparse-reward tasks through exploration is one of the major challenges in deep reinforcement learning, especially in three-dimensional, partially observable environments. Critically, the algorithm proposed in this article uses a single human demonstration to solve hard-exploration problems. We train an agent on a combination of demonstrations and its own experience to solve problems with variable initial conditions. We adapt this idea and integrate it with proximal policy optimization (PPO). The agent is able to increase its performance and tackle harder problems by replaying its own past trajectories, prioritizing them by the obtained reward and the maximum value of the trajectory. We compare several variations of this algorithm to behavioral cloning on a set of hard-exploration tasks in the Animal-AI Olympics environment. To the best of our knowledge, learning a task of comparable difficulty in a three-dimensional environment from a single human demonstration has not been considered before.
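To make the trajectory-replay idea above concrete, the following is a minimal sketch of a buffer that stores the agent's own episodes and samples them with a priority derived from the obtained reward and the maximum value of the trajectory. All names and the specific weighting (TrajectoryBuffer, alpha, the equal 0.5/0.5 mix) are illustrative assumptions for this sketch, not the paper's implementation.

```python
# Hedged sketch of prioritized replay of the agent's own trajectories.
# The priority mix and buffer mechanics are assumptions, not the paper's exact method.
import random
from dataclasses import dataclass, field
from typing import List


@dataclass
class Trajectory:
    observations: List    # per-step observations
    actions: List         # per-step actions
    total_reward: float   # episode return obtained by the agent
    max_value: float      # maximum critic value estimate along the episode


@dataclass
class TrajectoryBuffer:
    capacity: int = 100
    alpha: float = 1.0    # sharpness of the prioritization (assumed hyperparameter)
    trajectories: List[Trajectory] = field(default_factory=list)

    def add(self, traj: Trajectory) -> None:
        self.trajectories.append(traj)
        if len(self.trajectories) > self.capacity:
            # drop the lowest-priority episode when the buffer is full
            self.trajectories.remove(min(self.trajectories, key=self._priority))

    def _priority(self, traj: Trajectory) -> float:
        # assumed mix of obtained reward and maximum trajectory value,
        # clipped so sampling weights stay positive
        score = 0.5 * traj.total_reward + 0.5 * traj.max_value
        return max(score, 1e-6) ** self.alpha

    def sample(self) -> Trajectory:
        # sample one stored episode with probability proportional to its priority
        weights = [self._priority(t) for t in self.trajectories]
        return random.choices(self.trajectories, weights=weights, k=1)[0]
```

Sampled trajectories could then be replayed (e.g. by resetting the environment further along the demonstrated path or by imitation-style updates) alongside fresh PPO rollouts; how exactly they are mixed into training is not specified by this sketch.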