The past decade has seen the rapid development of Reinforcement Learning (RL), which achieves impressive performance at the cost of substantial training resources. However, one of the greatest challenges in RL is generalization efficiency, i.e., generalization performance per unit of training time. This paper proposes a framework of Active Reinforcement Learning (ARL) over MDPs that improves generalization efficiency under limited resources through instance selection. Given a pool of instances, the algorithm selects valuable instances as the training set while training the policy. Unlike existing approaches, which train on all the given data, we actively select and use training data, thereby consuming fewer resources. Furthermore, we introduce a general instance evaluation metric and selection mechanism into the framework. Experimental results show that the proposed framework, with Proximal Policy Optimization as the policy optimizer, achieves higher generalization efficiency than unselected and unbiased-selection baselines.
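To make the selection-while-training idea concrete, the following is a minimal sketch of such an ARL loop, assuming a gym-style environment interface; the names `score_instance`, `active_rl`, and `ppo_update`, as well as the return-based scoring rule, are hypothetical illustrations rather than the paper's actual metric or implementation.

```python
# Hypothetical sketch of an active-selection RL loop: score candidate
# instances, keep the top-k as the training set, then run a PPO update.

def score_instance(policy, env):
    """Illustrative instance-value metric: negative episode return, so
    instances the current policy handles poorly score higher. The paper's
    general evaluation metric may differ."""
    obs, done, ret = env.reset(), False, 0.0
    while not done:
        obs, reward, done, _ = env.step(policy.act(obs))
        ret += reward
    return -ret

def active_rl(policy, candidate_envs, ppo_update, k=8, iterations=100):
    """Alternate between selecting the k most valuable instances and
    training the policy only on that selected subset."""
    for _ in range(iterations):
        scored = [(score_instance(policy, env), env) for env in candidate_envs]
        scored.sort(key=lambda pair: pair[0], reverse=True)
        training_set = [env for _, env in scored[:k]]  # selected instances
        ppo_update(policy, training_set)               # train on selection only
    return policy
```

Compared with training on all candidate instances, the loop spends policy-update compute only on the selected subset, which is the source of the resource savings claimed above.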