We propose an optimistic model-based algorithm, dubbed SMRL, for finite-horizon episodic reinforcement learning (RL) when the transition model is specified by exponential family distributions with $d$ parameters and the reward is bounded and known. SMRL uses score matching, an unnormalized density estimation technique that enables efficient estimation of the model parameter by ridge regression. Under standard regularity assumptions, SMRL achieves $\tilde O(d\sqrt{H^3T})$ online regret, where $H$ is the length of each episode and $T$ is the total number of interactions (ignoring polynomial dependence on structural scale parameters).
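To make the estimation step concrete, the following is a minimal sketch (not the paper's full SMRL loop) of how score matching for an exponential family reduces to a ridge-regression closed form. It assumes, purely for illustration, a 1-D Gaussian written as $p_\theta(x) \propto \exp(\theta^\top \psi(x))$ with sufficient statistic $\psi(x) = (x, -x^2/2)$, so that $\theta = (\mu/\sigma^2, 1/\sigma^2)$; the function name `score_matching_ridge` and the regularizer `lam` are hypothetical choices, not identifiers from the paper.

```python
import numpy as np

# Sketch: score matching for p_theta(x) ∝ exp(theta · psi(x)),
# psi(x) = (x, -x^2/2). The score-matching loss
#   J(theta) = E[ 0.5 * ||d/dx log p_theta||^2 + d^2/dx^2 log p_theta ]
# is quadratic in theta, so the ridge-regularized minimizer has the
# closed form  theta_hat = -(A + lam*I)^{-1} b,  where
#   A = mean_j Jpsi(x_j) Jpsi(x_j)^T   (Gram matrix of psi-derivatives)
#   b = mean_j Laplacian(psi)(x_j).
def score_matching_ridge(xs, lam=1e-3):
    # Derivative of psi at x is (1, -x); second derivative is (0, -1).
    Jpsi = np.stack([np.ones_like(xs), -xs])   # shape (2, m)
    A = (Jpsi @ Jpsi.T) / len(xs)              # (2, 2) empirical Gram matrix
    b = np.array([0.0, -1.0])                  # empirical mean Laplacian of psi
    return -np.linalg.solve(A + lam * np.eye(2), b)

rng = np.random.default_rng(0)
mu, sigma = 1.5, 0.7
xs = rng.normal(mu, sigma, size=10_000)
theta_hat = score_matching_ridge(xs)
print(theta_hat, "vs true", np.array([mu / sigma**2, 1.0 / sigma**2]))
```

Because the loss is quadratic in $\theta$, no normalizing constant ever needs to be computed, which is what makes the per-episode parameter update as cheap as a ridge regression.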