The remarkable success of reinforcement learning (RL) heavily relies on observing the reward of every visited state-action pair. In many real-world applications, however, an agent can observe only a score that represents the quality of the whole trajectory, which is referred to as the {\em trajectory-wise reward}. In such a situation, it is difficult for standard RL methods to utilize the trajectory-wise reward effectively, and large bias and variance errors can be incurred in policy evaluation. In this work, we propose a novel offline RL algorithm, called Pessimistic vAlue iteRaTion with rEward Decomposition (PARTED), which decomposes the trajectory return into per-step proxy rewards via least-squares-based reward redistribution, and then performs pessimistic value iteration based on the learned proxy rewards. To ensure that the value functions constructed by PARTED are always pessimistic with respect to the optimal ones, we design a new penalty term to offset the uncertainty of the proxy reward. For general episodic MDPs with large state spaces, we show that PARTED with overparameterized neural network function approximation achieves an $\tilde{\mathcal{O}}(D_{\text{eff}}H^2/\sqrt{N})$ suboptimality, where $H$ is the length of the episode, $N$ is the total number of samples, and $D_{\text{eff}}$ is the effective dimension of the neural tangent kernel matrix. To further illustrate the result, we show that PARTED achieves an $\tilde{\mathcal{O}}(dH^3/\sqrt{N})$ suboptimality for linear MDPs, where $d$ is the feature dimension; this matches the neural network result when $D_{\text{eff}}=dH$. To the best of our knowledge, PARTED is the first offline RL algorithm that is provably efficient in general MDPs with trajectory-wise rewards.
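To make the reward-decomposition step concrete, here is a hedged sketch in our own notation (the estimator actually analyzed may differ, e.g., it may use step-dependent parameters or a neural function class): given $N$ offline trajectories $\tau=(s_1^\tau,a_1^\tau,\dots,s_H^\tau,a_H^\tau)$ with observed trajectory-wise returns $R(\tau)$ and a feature map $\phi$, the proxy-reward parameter is obtained by regularized least squares,
\[
\hat{\theta} \in \arg\min_{\theta}\; \sum_{\tau=1}^{N} \Big( R(\tau) - \sum_{h=1}^{H} \big\langle \phi(s_h^\tau, a_h^\tau), \theta \big\rangle \Big)^2 + \lambda \|\theta\|_2^2 ,
\]
and the per-step proxy reward $\hat{r}(s,a) = \langle \phi(s,a), \hat{\theta} \rangle$ is then fed into pessimistic value iteration, where the penalty term mentioned above offsets the uncertainty of $\hat{\theta}$.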