The recently successful Munchausen Reinforcement Learning (M-RL) features implicit Kullback-Leibler (KL) regularization by augmenting the reward function with the logarithm of the current stochastic policy. Although significant improvement has been shown with the Boltzmann softmax policy, when the Tsallis sparsemax policy is considered instead, the augmentation leads to a flat learning curve on almost every problem considered. We show that this is due to the mismatch between the conventional logarithm and the non-logarithmic (generalized) nature of Tsallis entropy. Drawing inspiration from the Tsallis statistics literature, we propose to correct this mismatch in M-RL with the help of $q$-logarithm/exponential functions. The proposed formulation leads to implicit Tsallis KL regularization under the maximum Tsallis entropy framework. We show that this formulation of M-RL again achieves superior performance on benchmark problems and sheds light on more general M-RL with various entropic indices $q$.
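For concreteness, here is a minimal sketch of the quantities involved, using the standard Tsallis-statistics conventions (the paper's exact normalization of the entropic index $q$ may differ). The $q$-logarithm and $q$-exponential are defined for $q \neq 1$ as
\[
\ln_q x \;=\; \frac{x^{1-q} - 1}{1 - q},
\qquad
\exp_q x \;=\; \big[\, 1 + (1-q)\, x \,\big]_+^{\frac{1}{1-q}},
\]
both recovering $\ln x$ and $e^{x}$ in the limit $q \to 1$. Under these conventions, the Munchausen bonus that augments the reward with the scaled log-policy, $r(s,a) + \alpha \ln \pi(a \mid s)$, would be replaced by a $q$-logarithmic bonus of the form $r(s,a) + \alpha \ln_q \pi(a \mid s)$, so that the augmentation matches the generalized (non-logarithmic) entropy underlying the Tsallis sparsemax policy.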