Model-free deep reinforcement learning has achieved great success in many domains, such as video games, recommendation systems, and robotic control tasks. In continuous control tasks, the widely used policies with Gaussian distributions often result in ineffective exploration of the environment and limit the performance of algorithms. In this paper, we propose a density-free off-policy algorithm, Generative Actor-Critic (GAC), which uses the push-forward model to increase the expressiveness of policies and includes an entropy-like technique, the MMD-entropy regularizer, to balance exploration and exploitation. Additionally, we devise an adaptive mechanism to automatically scale this regularizer, which further improves the stability and robustness of GAC. Experimental results show that push-forward policies possess desirable features, such as multi-modality, which can significantly improve the exploration efficiency and asymptotic performance of algorithms.
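To make the two ingredients named above concrete, the following is a minimal, hypothetical PyTorch sketch (not the paper's implementation): a push-forward policy that maps a state and a noise sample through a network to an action, so the action distribution is defined implicitly without an explicit density, together with a Gaussian-kernel MMD estimate of the kind an MMD-entropy regularizer could be built on. All names (PushForwardPolicy, mmd_penalty), layer sizes, and the choice of kernel are illustrative assumptions.

```python
import torch
import torch.nn as nn

class PushForwardPolicy(nn.Module):
    """Implicit (push-forward) policy: action = net(state, noise)."""
    def __init__(self, state_dim, action_dim, noise_dim=8, hidden=256):
        super().__init__()
        self.noise_dim = noise_dim
        self.net = nn.Sequential(
            nn.Linear(state_dim + noise_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, action_dim), nn.Tanh(),  # actions bounded in [-1, 1]
        )

    def forward(self, state, n_samples=1):
        # Draw noise and push it forward through the network to obtain actions;
        # multiple samples per state expose multi-modal action distributions.
        state = state.unsqueeze(1).expand(-1, n_samples, -1)
        noise = torch.randn(state.shape[0], n_samples, self.noise_dim)
        return self.net(torch.cat([state, noise], dim=-1))

def mmd_penalty(actions, reference, bandwidth=1.0):
    """Biased estimate of squared MMD with a Gaussian kernel between policy
    action samples and reference samples (e.g. uniform random actions).
    A term of this form can serve as a density-free, entropy-like regularizer."""
    def kernel(x, y):
        return torch.exp(-torch.cdist(x, y) ** 2 / (2 * bandwidth ** 2))
    return (kernel(actions, actions).mean()
            + kernel(reference, reference).mean()
            - 2 * kernel(actions, reference).mean())
```

In an actor-critic loop, actions sampled from such a policy would be scored by the critic, with the MMD term added to the actor objective and its weight adjusted by the adaptive scaling mechanism described above; the sketch is only meant to illustrate this structure.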