We address the problem of model selection for the finite-horizon episodic Reinforcement Learning (RL) problem, where the transition kernel $P^*$ belongs to a family of models $\mathcal{P}^*$ with finite metric entropy. In the model selection framework, instead of $\mathcal{P}^*$, we are given $M$ nested families of transition kernels $\mathcal{P}_1 \subset \mathcal{P}_2 \subset \ldots \subset \mathcal{P}_M$. We propose and analyze a novel algorithm, \emph{Adaptive Reinforcement Learning (General)} (\texttt{ARL-GEN}), that adapts to the smallest such family containing the true transition kernel $P^*$. \texttt{ARL-GEN} uses the Upper Confidence Reinforcement Learning (\texttt{UCRL}) algorithm with value-targeted regression as a blackbox and adds a model selection module at the beginning of each epoch. Under a mild separability assumption on the model classes, we show that \texttt{ARL-GEN} obtains, with high probability, a regret of $\tilde{\mathcal{O}}(d_{\mathcal{E}}^* H^2 + \sqrt{d_{\mathcal{E}}^* \mathbb{M}^* H^2 T})$, where $H$ is the horizon length, $T$ is the total number of steps, $d_{\mathcal{E}}^*$ is the Eluder dimension, and $\mathbb{M}^*$ is the metric entropy corresponding to $\mathcal{P}^*$. Note that this regret scaling matches that of an oracle that knows $\mathcal{P}^*$ in advance. We show that the cost of model selection for \texttt{ARL-GEN} is an additive term in the regret with only a weak dependence on $T$. Subsequently, we remove the separability assumption and consider the setup of linear mixture MDPs, where the transition kernel $P^*$ admits a linear function approximation. Exploiting this low-rank structure, we propose novel adaptive algorithms for model selection and obtain (order-wise) regret identical to that of an oracle with knowledge of the true model class.
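To make the epoch structure concrete, the following minimal Python sketch illustrates the outer loop the abstract describes. Everything here is a hypothetical illustration: `ToyModelClass`, `fits`, and `ucrl_vtr_epoch` are invented placeholders (the $\sqrt{n}$-scaled test merely mimics the role the separability assumption plays), not the authors' implementation or the actual UCRL-VTR algorithm.

```python
import math

class ToyModelClass:
    """Toy stand-in for a family of transition kernels P_i.

    `error` mimics the best-in-class prediction loss per sample:
    0 for a class containing the true kernel P*, bounded away from 0
    otherwise (this separation is what the separability assumption buys).
    """
    def __init__(self, error):
        self.error = error

    def prediction_error(self, transition):
        return self.error

    def step(self):
        # Placeholder transition (s, a, s'); a real run would come
        # from interacting with the MDP under the UCRL-VTR policy.
        return (0, 0, 0)

def fits(model_class, history, threshold=1.0):
    """Hypothetical goodness-of-fit test used by the selection module."""
    if not history:
        return True  # no evidence against the smallest class yet
    loss = sum(model_class.prediction_error(t) for t in history)
    # Reject the class once its cumulative loss exceeds the
    # statistical-fluctuation scale ~ sqrt(n) of a well-specified class.
    return loss <= threshold * math.sqrt(len(history))

def ucrl_vtr_epoch(model_class, epoch_length):
    """Blackbox stand-in for one epoch of UCRL with value-targeted
    regression run over `model_class`; returns collected transitions."""
    return [model_class.step() for _ in range(epoch_length)]

def arl_gen(model_classes, num_epochs, epoch_length):
    """Adapt to the smallest class P_1 ⊂ ... ⊂ P_M containing P*."""
    m = 0          # index of the currently selected class
    history = []   # all transitions observed so far
    for _ in range(num_epochs):
        # Model-selection module at the start of each epoch: promote
        # to the next larger class while the current one is rejected.
        while m < len(model_classes) - 1 and not fits(model_classes[m], history):
            m += 1
        history.extend(ucrl_vtr_epoch(model_classes[m], epoch_length))
    return m

# Example: P_1 is misspecified (positive error), P_2 contains P*.
# After the first epoch the test rejects P_1 and the loop settles on P_2.
print(arl_gen([ToyModelClass(0.5), ToyModelClass(0.0)], num_epochs=5, epoch_length=20))
```

In this sketch, selection moves only upward through the nested classes, so any misselection cost is confined to early epochs. This is one way to read the abstract's claim that the cost of model selection is an additive regret term with only weak dependence on $T$.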

