We study episodic reinforcement learning under unknown adversarial corruptions in both the rewards and the transition probabilities of the underlying system. We propose new algorithms which, compared to the existing results of Lykouris et al. (2020), achieve strictly better regret bounds in terms of total corruption for the tabular setting. To be specific, firstly, our regret bounds depend on more precise numerical values of the total reward corruption and the total transition corruption, instead of only on the total number of corrupted episodes. Secondly, our regret bounds are the first in the reinforcement learning setting in which the amount of corruption appears additively with respect to $\min\{\sqrt{T}, \text{PolicyGapComplexity}\}$ rather than multiplicatively. Our results follow from a general algorithmic framework that combines a corruption-robust policy-elimination meta-algorithm with plug-in reward-free exploration sub-algorithms. Replacing the meta-algorithm or the sub-algorithm may extend the framework to other corrupted settings with potentially more structure.
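As a purely schematic illustration of the additive-versus-multiplicative distinction (not a statement of the exact bounds), the display below contrasts the two regimes; it suppresses all problem-dependent factors other than the number of episodes $T$ and uses $C$ as a generic placeholder for the corruption measures above:
% Schematic only: constants, horizon/state/action dependence, and the precise
% corruption measure are suppressed; C is a placeholder for total corruption.
\[
\text{multiplicative (prior work):}\quad
\mathrm{Regret} = \widetilde{O}\!\left( C \cdot \min\{\sqrt{T},\, \text{PolicyGapComplexity}\} \right),
\qquad
\text{additive (this work):}\quad
\mathrm{Regret} = \widetilde{O}\!\left( \min\{\sqrt{T},\, \text{PolicyGapComplexity}\} + \mathrm{poly}(C) \right).
\]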