Inverse reinforcement learning (IRL) is the task of finding a reward function that generates a desired optimal policy for a given Markov Decision Process (MDP). This paper develops an information-theoretic lower bound for the sample complexity of the finite-state, finite-action IRL problem. A geometric construction of $\beta$-strict separable IRL problems using spherical codes is considered. Properties of the ensemble size as well as the Kullback-Leibler divergence between the generated trajectories are derived. The resulting ensemble is then used along with Fano's inequality to derive a sample complexity lower bound of $\Omega(n \log n)$, where $n$ is the number of states in the MDP.
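As a rough illustration of the Fano step (the symbols $M$, $m$, and $\kappa$ are notation assumed here for the ensemble size, the number of observed trajectories, and a uniform upper bound on the pairwise KL divergence between trajectory distributions, respectively, and are not taken from the abstract), Fano's inequality bounds the error probability of any estimator $\hat{R}$ of the true reward $R$ by
\[
  \Pr[\hat{R} \neq R] \;\ge\; 1 - \frac{m\,\kappa + \log 2}{\log M},
\]
so if the spherical-code construction yields an ensemble with $\log M = \Theta(n \log n)$ and $\kappa = O(1)$, then driving the error probability below $1/2$ requires
\[
  m \;\ge\; \frac{\log M - 2\log 2}{2\,\kappa} \;=\; \Omega(n \log n).
\]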