When function approximation is used, solving the Bellman optimality equation with stability guarantees has remained a major open problem in reinforcement learning for decades. The fundamental difficulty is that the Bellman operator may become an expansion in general, resulting in oscillating and even divergent behavior of popular algorithms like Q-learning. In this paper, we revisit the Bellman equation, and reformulate it into a novel primal-dual optimization problem using Nesterov's smoothing technique and the Legendre-Fenchel transformation. We then develop a new algorithm, called Smoothed Bellman Error Embedding, to solve this optimization problem where any differentiable function class may be used. We provide what we believe to be the first convergence guarantee for general nonlinear function approximation, and analyze the algorithm's sample complexity. Empirically, our algorithm compares favorably to state-of-the-art baselines in several benchmark control problems.
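For concreteness, the following is a hedged sketch of the reformulation summarized above; the symbols $V$, $\pi$, $\rho$, the smoothing parameter $\lambda$, and the discount $\gamma$ are introduced here for illustration, and the paper's exact formulation may differ. Smoothing the max in the Bellman optimality equation with an entropy term gives
\[
V^*(s) \;=\; \max_{\pi(\cdot\mid s)} \; \mathbb{E}_{a \sim \pi(\cdot\mid s)}\!\left[\, R(s,a) + \gamma\, \mathbb{E}_{s'\mid s,a}\!\left[ V^*(s') \right] \right] \;+\; \lambda\, H\!\left(\pi(\cdot\mid s)\right),
\]
whose temporal-consistency form holds for every action:
\[
V^*(s) \;=\; R(s,a) + \gamma\, \mathbb{E}_{s'\mid s,a}\!\left[ V^*(s') \right] \;-\; \lambda \log \pi^*(a\mid s).
\]
Minimizing the squared consistency error and applying the Legendre-Fenchel transform of the square, $x^2 = \max_{\rho}\, \left(2\rho x - \rho^2\right)$, yields a primal-dual saddle-point problem of the form
\[
\min_{V,\pi}\; \max_{\rho}\;\; \mathbb{E}_{s,a,s'}\!\left[\, 2\,\rho(s,a)\big( R(s,a) + \gamma V(s') - \lambda \log \pi(a\mid s) - V(s) \big) \;-\; \rho(s,a)^2 \right],
\]
which can be optimized with any differentiable parameterization of $V$, $\pi$, and the dual function $\rho$. Because the conditional expectation over $s'$ now enters linearly, unbiased stochastic gradients can be formed from single sampled transitions, avoiding the double-sampling issue that destabilizes naive squared-Bellman-residual minimization.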