The recent boom in the literature on entropy-regularized reinforcement learning (RL) reveals that Kullback-Leibler (KL) regularization benefits RL algorithms by canceling out errors under mild assumptions. However, existing analyses focus on fixed regularization with a constant weighting coefficient and do not consider the case where the coefficient is allowed to change dynamically. In this paper, we study the dynamic coefficient scheme and present the first asymptotic error bound for it. Based on this error bound, we propose an effective scheme to tune the coefficient according to the magnitude of error, in favor of more robust learning. Complementing this development, we propose a novel algorithm, Geometric Value Iteration (GVI), that features a dynamic, error-aware KL coefficient design aimed at mitigating the impact of errors on performance. Our experiments demonstrate that GVI navigates the trade-off between learning speed and robustness more effectively than the uniform averaging induced by a constant KL coefficient. The combination of GVI and deep networks shows stable learning behavior even in the absence of a target network, where algorithms with a constant KL coefficient would oscillate greatly or even fail to converge.
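To make the setting concrete, the sketch below runs tabular KL-regularized value iteration on a small random MDP, re-tuning the KL coefficient from the current Bellman residual at each iteration. It is an illustration of the general idea only: the MDP, the additive-noise error model, and the specific coefficient schedule (clipping the residual into [0.1, 10]) are assumptions for this example, not the exact GVI update rule analyzed in the paper.

```python
# Minimal sketch: tabular KL-regularized value iteration with a dynamic,
# error-aware coefficient. Illustration only; the coefficient schedule and
# noise model below are assumptions, not the paper's exact GVI algorithm.
import numpy as np

rng = np.random.default_rng(0)

# Small random MDP (assumed for illustration).
n_states, n_actions, gamma = 5, 3, 0.9
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a, s']
r = rng.uniform(0.0, 1.0, size=(n_states, n_actions))

def kl_greedy(q, pi_prev, lam):
    """Closed form of argmax_pi <pi, q> - lam * KL(pi || pi_prev)."""
    logits = np.log(pi_prev + 1e-12) + q / lam
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    pi = np.exp(logits)
    return pi / pi.sum(axis=1, keepdims=True)

q = np.zeros((n_states, n_actions))
pi = np.full((n_states, n_actions), 1.0 / n_actions)
lam = 1.0                                         # initial KL coefficient

for k in range(200):
    pi = kl_greedy(q, pi, lam)
    # Approximate evaluation step; additive noise stands in for errors.
    v = (pi * q).sum(axis=1)
    q_new = r + gamma * (P @ v) + rng.normal(0.0, 0.05, size=q.shape)
    bellman_residual = np.abs(q_new - q).max()
    q = q_new
    # Error-aware schedule (assumption): larger residual -> stronger KL
    # regularization toward the previous policy, i.e. heavier averaging.
    lam = float(np.clip(bellman_residual, 0.1, 10.0))

print("greedy policy:", q.argmax(axis=1))
```

With a constant coefficient, the same loop applies the same degree of averaging regardless of how noisy the current evaluation step is; the dynamic schedule above is one simple way to increase averaging only when the measured error is large.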