Fitted Q-iteration (FQI) and its entropy-regularized variant, soft FQI, are central tools for value-based model-free offline reinforcement learning, but they can behave poorly under function approximation and distribution shift. In the entropy-regularized setting, we show that the soft Bellman operator is locally contractive in the stationary norm of the soft-optimal policy, rather than in the behavior-weighted norm used by standard FQI. This geometric mismatch explains the instability of soft Q-iteration with function approximation in the absence of Bellman completeness. To restore contraction, we introduce stationary-reweighted soft FQI, which reweights each regression update using the stationary distribution of the current policy. Assuming approximate realizability, we prove local linear convergence under function approximation, with weight-estimation errors that are geometrically damped. Our analysis further suggests that global convergence may be recovered by gradually reducing the softmax temperature, and that this continuation approach can extend to the hardmax limit under a mild margin condition.
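For concreteness, a minimal sketch of the update being modified, in assumed notation ($\tau$: softmax temperature, $\mu$: offline data distribution, $\mathcal{F}$: function class, $d_{\pi_k}$: stationary distribution of the current softmax policy $\pi_k$); the exact definitions and norms follow the paper's setup. The soft Bellman operator is the log-sum-exp backup
\[
(\mathcal{T}_\tau Q)(s,a) \;=\; r(s,a) + \gamma\,\mathbb{E}_{s' \sim P(\cdot\mid s,a)}\Big[\tau \log \sum_{a'} \exp\big(Q(s',a')/\tau\big)\Big],
\]
standard soft FQI regresses onto this target in the behavior-weighted norm,
\[
Q_{k+1} \in \operatorname*{arg\,min}_{Q \in \mathcal{F}} \; \mathbb{E}_{(s,a)\sim\mu}\Big[\big(Q(s,a) - (\mathcal{T}_\tau Q_k)(s,a)\big)^2\Big],
\]
and the stationary-reweighted variant importance-weights the same regression toward $d_{\pi_k}$,
\[
Q_{k+1} \in \operatorname*{arg\,min}_{Q \in \mathcal{F}} \; \mathbb{E}_{(s,a)\sim\mu}\Big[\tfrac{d_{\pi_k}(s,a)}{\mu(s,a)}\,\big(Q(s,a) - (\mathcal{T}_\tau Q_k)(s,a)\big)^2\Big],
\qquad \pi_k(a\mid s) \propto \exp\big(Q_k(s,a)/\tau\big).
\]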