We study reinforcement learning (RL) in episodic, factored Markov decision processes (FMDPs). We propose an algorithm called FMDP-BF, which leverages the factored structure of the FMDP. The regret of FMDP-BF is shown to be exponentially smaller than that of optimal algorithms designed for non-factored MDPs, and improves on the best previous result for FMDPs~\citep{osband2014near} by a factor of $\sqrt{H|\mathcal{S}_i|}$, where $|\mathcal{S}_i|$ is the cardinality of the factored state subspace and $H$ is the planning horizon. To show the optimality of our bounds, we also provide a lower bound for FMDPs, which indicates that our algorithm is near-optimal with respect to the number of timesteps $T$, the horizon $H$, and the cardinality of the factored state-action subspaces. Finally, as an application, we study a new formulation of constrained RL, known as RL with knapsack constraints (RLwK), and provide the first sample-efficient algorithm for it based on FMDP-BF.