Despite the recent success of reinforcement learning in various domains, these approaches remain, for the most part, deterringly sensitive to hyper-parameters and often depend on essential engineering feats for their success. We consider the case of off-policy generative adversarial imitation learning and perform an in-depth qualitative and quantitative review of the method. We show that forcing the learned reward function to be locally Lipschitz-continuous is a sine qua non condition for the method to perform well. We then study the effects of this necessary condition and provide several theoretical results involving the local Lipschitzness of the state-value function. We complement these guarantees with empirical evidence attesting to the strong positive effect that consistently satisfying the Lipschitzness constraint on the reward has on imitation performance. Finally, we tackle a generic pessimistic reward preconditioning add-on that spawns a large class of reward-shaping methods and provably makes the base method it is plugged into more robust, as shown in several additional theoretical guarantees. We then discuss these through a fine-grained lens and share our insights. Crucially, the guarantees derived and reported in this work are valid for any reward satisfying the Lipschitzness condition; nothing is specific to imitation. As such, they may be of independent interest.
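A standard way to softly enforce such a local Lipschitzness constraint on a learned reward is a gradient penalty in the style of WGAN-GP, evaluated on interpolates between expert and policy state-action pairs. The sketch below is illustrative of that technique only, not the paper's exact implementation; `RewardNet`, `gradient_penalty`, and the argument names are hypothetical.

```python
import torch
import torch.nn as nn


class RewardNet(nn.Module):
    """Learned reward (discriminator) over state-action pairs."""

    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, obs: torch.Tensor, act: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([obs, act], dim=-1))


def gradient_penalty(reward, expert_obs, expert_act, policy_obs, policy_act,
                     target_norm: float = 1.0) -> torch.Tensor:
    """Penalize deviations of the reward's input-gradient norm from
    `target_norm`, softly enforcing local Lipschitzness around random
    interpolates of expert and policy samples (WGAN-GP style)."""
    # Random interpolation coefficients, one per sample in the batch.
    eps = torch.rand(expert_obs.size(0), 1, device=expert_obs.device)
    obs = (eps * expert_obs + (1.0 - eps) * policy_obs).requires_grad_(True)
    act = (eps * expert_act + (1.0 - eps) * policy_act).requires_grad_(True)
    out = reward(obs, act)
    # Gradient of the reward w.r.t. its inputs at the interpolates.
    grads = torch.autograd.grad(outputs=out, inputs=[obs, act],
                                grad_outputs=torch.ones_like(out),
                                create_graph=True)
    grad_norm = torch.cat(grads, dim=-1).norm(2, dim=-1)
    return ((grad_norm - target_norm) ** 2).mean()
```

During adversarial reward learning, this penalty would typically be added to the discriminator loss with some weighting coefficient, e.g. `loss = gail_loss + lam * gradient_penalty(...)`, so that the reward the policy is trained against remains locally Lipschitz-continuous throughout training.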