We propose two algorithms for episodic stochastic shortest path problems with linear function approximation. The first is computationally expensive but provably obtains $\tilde{O} (\sqrt{B_\star^3 d^3 K/c_{\min}} )$ regret, where $B_\star$ is a (known) upper bound on the optimal cost-to-go function, $d$ is the feature dimension, $K$ is the number of episodes, and $c_{\min}$ is the minimal cost of non-goal state-action pairs (assumed to be positive). The second is computationally efficient in practice, and we conjecture that it obtains the same regret bound. Both algorithms are based on an optimistic least-squares version of value iteration analogous to the finite-horizon backward induction approach from Jin et al. 2020. To the best of our knowledge, these are the first regret bounds for stochastic shortest path that are independent of the size of the state and action spaces.
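The optimistic least-squares value-iteration backup underlying both algorithms can be sketched as follows. This is an illustrative sketch, not the paper's algorithm: the regularizer `lam` and bonus scale `beta` are placeholder parameters, and since the objective is cost minimization the optimistic estimate *subtracts* the exploration bonus.

```python
import numpy as np

def optimistic_lsvi_backup(Phi, costs, next_values, phi_query, lam=1.0, beta=1.0):
    """One optimistic least-squares value-iteration backup (sketch).

    Phi:         (n, d) features of observed (state, action) pairs
    costs:       (n,)   observed immediate costs
    next_values: (n,)   current cost-to-go estimates at observed next states
    phi_query:   (d,)   feature vector of the pair to evaluate
    Returns an optimistic Q estimate: ridge regression on the empirical
    Bellman targets minus an elliptical confidence bonus.
    """
    n, d = Phi.shape
    Lam = lam * np.eye(d) + Phi.T @ Phi                       # regularized Gram matrix
    w = np.linalg.solve(Lam, Phi.T @ (costs + next_values))   # ridge-regression weights
    bonus = beta * np.sqrt(phi_query @ np.linalg.solve(Lam, phi_query))
    return float(phi_query @ w - bonus)                       # optimism for costs: subtract
```

For example, with many samples of a fixed feature direction and unit costs, the plain estimate (`beta=0`) approaches 1 while the optimistic estimate sits below it, and the gap shrinks as data accumulates.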