Deep Reinforcement Learning Lab Report
Source: ICAPS 2020
Author: DeepRL
Venue tier: top conference (researchers working on control and planning frequently submit to ACC, CDC, and ICAPS)
While the AI Planning and Reinforcement Learning communities focus on similar sequential decision-making problems, they remain somewhat unaware of each other's specific problems, techniques, methodologies, and evaluation practices.
This workshop aims to encourage discussion and collaboration between researchers in the fields of AI planning and reinforcement learning. We aim to bridge the gap between the two communities, facilitate the discussion of differences and similarities in existing techniques, and encourage collaboration across the fields. We solicit interest from AI researchers who work at the intersection of planning and reinforcement learning, in particular those focused on intelligent decision making. As such, the joint workshop program is an excellent opportunity to gather a large and diverse group of interested researchers.
The workshop solicits work at the intersection of the fields of reinforcement learning and planning. We also solicit work solely in one area that can influence advances in the other so long as the connections are clearly articulated in the submission.
Submissions are invited on topics including, but not limited to:
Reinforcement learning (model-based, Bayesian, deep, etc.)
Model representation and learning for planning
Planning using approximated/uncertain (learned) models
Monte Carlo planning
Learning search heuristics for planner guidance
Theoretical aspects of planning and reinforcement learning
Reinforcement Learning and planning competition(s)
Multi-agent planning and learning
Applications of both reinforcement learning and planning
Submission deadline: March 20th, 2020 (UTC-12 timezone)
Notification date: April 15th, 2020
Camera-ready deadline: May 15th, 2020
Workshop date: June 15 or 16 (TBD), 2020
We solicit workshop paper submissions, relevant to the call above, of the following types:
Long papers — up to 8 pages + unlimited references / appendices
Short papers — up to 4 pages + unlimited references / appendices
Extended abstracts — up to 2 pages + unlimited references / appendices
Please format submissions in AAAI style (see the instructions in the AAAI Author Kit, AuthorKit20.zip) and keep them to at most 9 pages including references. Authors considering submitting papers rejected from other conferences should do their utmost to address the comments given by the reviewers. Please do not submit papers to the workshop that have already been accepted to the main ICAPS conference.
Some accepted long papers will be selected for contributed talks. All accepted long and short papers and extended abstracts will be given a slot in the poster presentation session. Extended abstracts are intended as brief summaries of already-published papers, preliminary work, position papers, or challenges that might help bridge the gap.
As the main purpose of this workshop is to solicit discussion, the authors are invited to use the appendix of their submissions for that purpose.
Paper submissions should be made through EasyChair, https://easychair.org/conferences/?conf=prl2020.