Inverse reinforcement learning (IRL) aims to infer the reward function that explains expert behavior observed through trajectories of state--action pairs. A long-standing difficulty in classical IRL is the non-uniqueness of the recovered reward: many reward functions can induce the same optimal policy, rendering the inverse problem ill-posed. In this paper, we develop a statistical framework for inverse entropy-regularized reinforcement learning that resolves this ambiguity by combining entropy regularization with a least-squares reconstruction of the reward from the soft Bellman residual. This combination yields a unique, well-defined reward consistent with the expert policy, which we call the least-squares reward. We model the expert demonstrations as a Markov chain whose invariant distribution is induced by an unknown expert policy $\pi^\star$, and we estimate this policy by a penalized maximum-likelihood procedure over a class of conditional distributions on the action space. We establish high-probability bounds on the excess Kullback--Leibler divergence between the estimated policy and the expert policy, accounting for statistical complexity through covering numbers of the policy class. These results yield non-asymptotic, minimax-optimal convergence rates for the least-squares reward, revealing the interplay between smoothing (entropy regularization), model complexity, and sample size. Our analysis bridges behavior cloning, inverse reinforcement learning, and modern statistical learning theory.
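As a point of reference for the objects named above, the following display sketches one standard formalization of entropy-regularized optimality and a least-squares choice of reward; the notation ($\gamma$ for the discount factor, $\lambda$ for the entropy temperature, $\mu$ for a reference state--action distribution) and the exact least-squares criterion are illustrative assumptions and may differ from the paper's definitions.
\[
Q_r(s,a) \;=\; r(s,a) + \gamma\,\mathbb{E}_{s' \sim P(\cdot \mid s,a)}\!\big[V_r(s')\big],
\qquad
V_r(s) \;=\; \lambda \log \sum_{a} \exp\!\big(Q_r(s,a)/\lambda\big),
\qquad
\pi_r(a \mid s) \;\propto\; \exp\!\big(Q_r(s,a)/\lambda\big).
\]
Inverting these relations, any reward consistent with a policy $\pi$ can be written as
$r(s,a) = \lambda \log \pi(a \mid s) + V(s) - \gamma\,\mathbb{E}_{s' \sim P(\cdot \mid s,a)}[V(s')]$
for some value function $V$; a least-squares reward can then be singled out, for instance, as the member of this family with minimal squared norm under $\mu$:
\[
\widehat{r}(s,a) \;=\; \lambda \log \widehat{\pi}(a \mid s) + \widehat{V}(s) - \gamma\,\mathbb{E}_{s'}\big[\widehat{V}(s')\big],
\qquad
\widehat{V} \in \operatorname*{arg\,min}_{V}\;
\mathbb{E}_{(s,a)\sim\mu}\Big[\big(\lambda \log \widehat{\pi}(a \mid s) + V(s) - \gamma\,\mathbb{E}_{s'}[V(s')]\big)^{2}\Big],
\]
where $\widehat{\pi}$ denotes the penalized maximum-likelihood estimate of the expert policy.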