In many real-world problems, a learning agent needs to learn a problem's abstractions and its solution simultaneously. However, most such abstractions must be designed and refined by hand for each problem and application domain. This paper presents a novel top-down approach for constructing state abstractions while carrying out reinforcement learning. Starting with state variables and a simulator, the method dynamically computes a domain-independent abstraction based on the dispersion of Q-values within abstract states as the agent continues acting and learning. Extensive empirical evaluation on multiple domains and problems shows that this approach automatically learns abstractions that are finely tuned to the problem, yields strong sample efficiency, and results in the RL agent significantly outperforming existing approaches.
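The core refinement signal described above can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: it assumes a tabular Q-function keyed by `(state, action)` pairs and an `abstraction` function mapping concrete states to abstract states (both names are hypothetical). Concrete-state Q-values are grouped by abstract state, and abstract states whose Q-value dispersion exceeds a threshold become candidates for splitting into finer abstract states.

```python
from collections import defaultdict
from statistics import pstdev

def dispersion_by_abstract_state(q_values, abstraction):
    """Group concrete-state Q-values by abstract state and return
    the standard deviation of Q-values within each group."""
    groups = defaultdict(list)
    for (state, action), q in q_values.items():
        groups[abstraction(state)].append(q)
    return {a: pstdev(qs) if len(qs) > 1 else 0.0
            for a, qs in groups.items()}

def states_to_refine(q_values, abstraction, threshold=1.0):
    """Abstract states with high Q-value dispersion mix concrete
    states that behave differently, so they are candidates for
    refinement (splitting). The threshold is an assumed parameter."""
    disp = dispersion_by_abstract_state(q_values, abstraction)
    return [a for a, d in disp.items() if d > threshold]

# Toy example: abstract states group concrete states by tens digit.
abstraction = lambda s: s // 10
q_values = {
    (0, "a"): 1.0, (1, "a"): 1.2,   # abstract state 0: similar Q-values
    (10, "a"): 0.0, (11, "a"): 5.0, # abstract state 1: divergent Q-values
}
print(states_to_refine(q_values, abstraction))  # → [1]
```

Only abstract state 1 is flagged, since its member states have widely divergent Q-values (0.0 vs. 5.0), while abstract state 0 groups states whose values are nearly identical.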