A fundamental question in reinforcement learning theory is the following: if the optimal value functions are linear in given features, can we learn them efficiently? This problem's counterpart in supervised learning, linear regression, can be solved both statistically and computationally efficiently. It was therefore quite surprising when a recent work \cite{kane2022computational} showed a computational-statistical gap for linear reinforcement learning: even though there are polynomial sample-complexity algorithms, unless NP = RP, there are no polynomial-time algorithms for this setting. In this work, we build on their result to show a computational lower bound for linear reinforcement learning that is exponential in the feature dimension and horizon, under the Randomized Exponential Time Hypothesis. To prove this, we construct a round-based game in which, in each round, the learner searches for an unknown vector in a unit hypercube. The rewards in this game are chosen such that if the learner achieves large reward, then the learner's actions can be used to simulate solving a variant of 3-SAT, where (a) each variable appears in a bounded number of clauses, and (b) if an instance is unsatisfiable, then no assignment satisfies more than a $(1-\epsilon)$-fraction of its clauses. We use standard reductions to show that this 3-SAT variant is essentially as hard as 3-SAT. Finally, we also show a lower bound optimized for horizon dependence that almost matches the best known upper bound of $\exp(\sqrt{H})$.
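For concreteness, the two properties of the 3-SAT variant can be stated over a standard clause-list encoding. The following is a minimal Python sketch, not part of the paper's construction, with a representation and helper names chosen purely for illustration: it checks the bounded-occurrence property (a) and computes the satisfied-clause fraction that property (b) quantifies over.

\begin{verbatim}
# Hypothetical illustration of the 3-SAT variant's two properties.
# A formula is a list of clauses; each clause is a tuple of nonzero
# integer literals (negative sign means the variable is negated).
from collections import Counter

def occurrences_bounded(clauses, bound):
    """Property (a): every variable appears in at most `bound` clauses."""
    counts = Counter(
        v for clause in clauses for v in {abs(lit) for lit in clause}
    )
    return all(c <= bound for c in counts.values())

def satisfied_fraction(clauses, assignment):
    """Fraction of clauses satisfied by a 0/1 assignment (dict: var -> bool).
    Property (b) asserts that for unsatisfiable instances this fraction is
    at most 1 - epsilon for every assignment."""
    sat = sum(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )
    return sat / len(clauses)

# Tiny example instance with each variable in at most 3 clauses.
clauses = [(1, 2, -3), (-1, 2, 3), (1, -2, 3)]
assignment = {1: True, 2: True, 3: False}
print(occurrences_bounded(clauses, bound=3))    # True
print(satisfied_fraction(clauses, assignment))  # 1.0
\end{verbatim}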