In this paper, we propose a max-min entropy framework for reinforcement learning (RL) to overcome a limitation of the soft actor-critic (SAC) algorithm, which implements maximum entropy RL in model-free, sample-based learning. Whereas maximum entropy RL guides the policy toward states with high entropy in the future, the proposed max-min entropy framework aims to learn to visit states with low entropy and to maximize the entropy of these low-entropy states, thereby promoting better exploration. For general Markov decision processes (MDPs), an efficient algorithm is constructed under the proposed max-min entropy framework based on the disentanglement of exploration and exploitation. Numerical results show that the proposed algorithm yields substantial performance improvements over current state-of-the-art RL algorithms.
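To make the contrast concrete, the LaTeX sketch below restates the standard SAC (maximum entropy) objective and a schematic version of the max-min idea described above. The max-min expression is only an illustration inferred from this abstract, not the paper's exact formulation; the temperature $\alpha$, discount $\gamma$, entropy $\mathcal{H}$, and state distribution $d_{\pi}$ are notational assumptions.

\documentclass{article}
\usepackage{amsmath, amssymb}
\begin{document}

% Standard maximum entropy (SAC) objective: expected return plus a
% policy-entropy bonus, with temperature \alpha and discount \gamma.
\begin{align*}
J_{\mathrm{SAC}}(\pi)
  &= \mathbb{E}_{\pi}\!\left[\sum_{t=0}^{\infty} \gamma^{t}
     \Big( r(s_t, a_t) + \alpha\, \mathcal{H}\big(\pi(\cdot \mid s_t)\big) \Big)\right].
\end{align*}

% Schematic max-min entropy idea (illustration only, not the paper's exact
% objective): rather than steering toward states that already have high
% entropy, raise the entropy at the lowest-entropy states reachable under
% the policy's state distribution d_\pi.
\begin{align*}
\max_{\pi} \;\; \min_{s \,:\, d_{\pi}(s) > 0} \; \mathcal{H}\big(\pi(\cdot \mid s)\big).
\end{align*}

\end{document}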