Most reinforcement learning algorithms rest on the key assumption that the underlying Markov decision process (MDP) is stationary. However, non-stationary MDPs with dynamic action spaces are ubiquitous in real-world scenarios. Although reinforcement learning with dynamic action spaces has been studied in many previous works, how to choose valuable actions from new and unseen actions to improve learning efficiency remains unaddressed. To tackle this problem, we propose an intelligent Action Pick-up (AP) algorithm that autonomously chooses, from a set of new actions, the valuable actions most likely to boost performance. In this paper, we first theoretically analyze and find that a prior optimal policy plays an important role in action pick-up by providing useful knowledge and experience. We then design two different AP methods based on the prior optimal policy: a frequency-based global method and a state clustering-based local method. Finally, we evaluate AP in two simulated but challenging environments where the action space varies over time. Experimental results demonstrate that our proposed AP outperforms the baselines in learning efficiency.