ICML 2020 || Complete Roundup of 126 "Reinforcement Learning" Papers

June 7, 2020 · Deep Reinforcement Learning Lab

Reported by the Deep Reinforcement Learning Lab

Source: ICML 2020

Author: RchalYang


The ICML 2020 results are out. Acceptances hit a new high, with 1,088 papers making the cut, yet the acceptance rate keeps dropping: just 21.8% this year (22.6% last year, 24.9% the year before). Google remains the strongest institution overall with 138 accepted papers (counting Google Brain and DeepMind), followed by UC Berkeley with 88, Stanford with 75, MIT with 66, Microsoft with 53, Facebook with 32, and IBM with 19. Chinese institutions also performed well, led as usual by the universities: Tsinghua with 36, Peking University with 20, and Shanghai Jiao Tong with 16. Reinforcement learning accounts for 11.58% of the accepted papers (126 of 1,088). These RL papers are listed below.


(1) My Fair Bandit: Distributed Learning of Max-Min Fairness with Multi-player Bandits

Ilai Bistritz (Stanford University) · Tavor Z Baharav (Stanford University) · Amir Leshem (Bar-Ilan University) · Nicholas Bambos (Stanford University)

(2) Generalization to New Actions in Reinforcement Learning

Ayush Jain (University of Southern California) · Andrew Szot (University of Southern California) · Joseph Lim (Univ. of Southern California)

(3) Generalized Neural Policies for Relational MDPs

Sankalp Garg (Indian Institute of Technology Delhi) · Aniket Bajpai (Indian Institute of Technology, Delhi) · Mausam (IIT Delhi)

(4) From Importance Sampling to Doubly Robust Policy Gradient

Jiawei Huang (University of Illinois at Urbana-Champaign) · Nan Jiang (University of Illinois at Urbana-Champaign)

(5) Kernel Methods for Cooperative Multi-Agent Learning with Delays

Abhimanyu Dubey (Massachusetts Institute of Technology) · Alex 'Sandy' Pentland (MIT)

(6) Robust Multi-Agent Decision-Making with Heavy-Tailed Payoffs

Abhimanyu Dubey (Massachusetts Institute of Technology) · Alex 'Sandy' Pentland (MIT)

(7) Learning the Valuations of a k-demand Agent

Hanrui Zhang (Duke University) · Vincent Conitzer (Duke)

(8) Improved Sleeping Bandits with Stochastic Action Sets and Adversarial Rewards

Aadirupa Saha (Indian Institute of Science (IISc), Bangalore) · Pierre Gaillard () · Michal Valko (DeepMind)

(9) Multi-Agent Determinantal Q-Learning

Yaodong Yang (Huawei Technology R&D UK) · Ying Wen (UCL) · Jun Wang (UCL) · Liheng Chen (Shanghai Jiao Tong University) · Kun Shao (Huawei Noah's Ark Lab) · David Mguni (Noah's Ark Laboratory, Huawei) · Weinan Zhang (Shanghai Jiao Tong University)

(10) Minimax Weight and Q-Function Learning for Off-Policy Evaluation

Masatoshi Uehara (Harvard University) · Jiawei Huang (University of Illinois at Urbana-Champaign) · Nan Jiang (University of Illinois at Urbana-Champaign)

(11) Learning Efficient Multi-agent Communication: An Information Bottleneck Approach

Rundong Wang (Nanyang Technological University) · Xu He (Nanyang Technological University) · Runsheng Yu (Nanyang Technological University) · Wei Qiu (Nanyang Technological University) · Bo An (Nanyang Technological University) · Zinovi Rabinovich (Nanyang Technological University)

(12) Multinomial Logit Bandit with Low Switching Cost

Kefan Dong (Tsinghua University) · Yingkai Li (Northwestern University) · Qin Zhang (Indiana University Bloomington) · Yuan Zhou (UIUC)

(13) Optimizing Data Usage via Differentiable Rewards

Xinyi Wang (Carnegie Mellon University) · Hieu Pham (Carnegie Mellon University) · Paul Michel (Carnegie Mellon University) · Antonios Anastasopoulos (Carnegie Mellon University) · Jaime Carbonell (Carnegie Mellon University) · Graham Neubig (Carnegie Mellon University)

(14) Optimistic Policy Optimization with Bandit Feedback

Lior Shani (Technion) · Yonathan Efroni (Technion) · Aviv Rosenberg (Tel Aviv University) · Shie Mannor (Technion)

(15) Learning Adversarial Markov Decision Processes with Bandit Feedback and Unknown Transition

Chi Jin (Princeton University) · Tiancheng Jin (University of Southern California) · Haipeng Luo (University of Southern California) · Suvrit Sra (MIT) · Tiancheng Yu (MIT)

(16) Asynchronous Coagent Networks

James Kostas (University of Massachusetts Amherst) · Chris Nota (University of Massachusetts Amherst) · Philip Thomas (University of Massachusetts Amherst)

(17) Understanding the Curse of Horizon in Off-Policy Evaluation via Conditional Importance Sampling

Yao Liu (Stanford University) · Pierre-Luc Bacon (Stanford University) · Emma Brunskill (Stanford University)

(18) Reinforcement Learning for Integer Programming: Learning to Cut

Yunhao Tang (Columbia University) · Shipra Agrawal (Columbia University) · Yuri Faenza (Columbia University)

(19) Safe Reinforcement Learning in Constrained Markov Decision Processes

Akifumi Wachi (IBM Research AI) · Yanan Sui (Tsinghua University)

(20) ROMA: Multi-Agent Reinforcement Learning with Emergent Roles

Tonghan Wang (Tsinghua University) · Heng Dong (Tsinghua) · Victor Lesser (UMASS) · Chongjie Zhang (Tsinghua University)

(21) Naive Exploration is Optimal for Online LQR

Max Simchowitz (UC Berkeley) · Dylan Foster (MIT)

(22) Implicit Generative Modeling for Efficient Exploration

Neale Ratzlaff (Oregon State University) · Qinxun Bai (Horizon Robotics) · Fuxin Li (Oregon State University) · Wei Xu (Horizon Robotics)

(23) Prediction-Guided Multi-Objective Reinforcement Learning for Continuous Robot Control

Jie Xu (Massachusetts Institute of Technology) · Yunsheng Tian (Massachusetts Institute of Technology) · Pingchuan Ma (MIT) · Daniela Rus (MIT CSAIL) · Shinjiro Sueda (Texas A&M University) · Wojciech Matusik (MIT)

(24) Double Reinforcement Learning for Efficient and Robust Off-Policy Evaluation

Nathan Kallus (Cornell University) · Masatoshi Uehara (Harvard University)

(25) Statistically Efficient Off-Policy Policy Gradients

Nathan Kallus (Cornell University) · Masatoshi Uehara (Harvard University)

(26) Off-Policy Actor-Critic with Shared Experience Replay

Simon Schmitt (DeepMind) · Matteo Hessel (DeepMind) · Karen Simonyan (DeepMind)

(27) Policy Teaching via Environment Poisoning: Training-time Adversarial Attacks against Reinforcement Learning

Amin Rakhsha (MPI-SWS) · Goran Radanovic (Max Planck Institute for Software Systems) · Rati Devidze (Max Planck Institute for Software Systems) · Jerry Zhu (University of Wisconsin-Madison) · Adish Singla (Max Planck Institute (MPI-SWS))

(28) Does the Markov Decision Process Fit the Data: Testing for the Markov Property in Sequential Decision Making

Chengchun Shi (London School of Economics and Political Science) · Runzhe Wan (North Carolina State University) · Rui Song () · Wenbin Lu () · Ling Leng (Amazon)

(29) No-Regret Exploration in Goal-Oriented Reinforcement Learning

Jean Tarbouriech (Facebook AI Research Paris & Inria Lille) · Evrard Garcelon (Facebook AI Research) · Michal Valko (DeepMind) · Matteo Pirotta (Facebook AI Research) · Alessandro Lazaric (Facebook AI Research)

(30) OPtions as REsponses: Grounding behavioural hierarchies in multi-agent reinforcement learning

Alexander Vezhnevets (DeepMind) · Yuhuai Wu (University of Toronto) · Maria Eckstein (UC Berkeley) · Rémi Leblond (DeepMind) · Joel Z Leibo (DeepMind)

(31) Reinforcement Learning for Molecular Design Guided by Quantum Mechanics

Gregor Simm (Cambridge University) · Robert Pinsler (University of Cambridge) · Jose Hernandez-Lobato (University of Cambridge)

(32) ConQUR: Mitigating Delusional Bias in Deep Q-Learning

DiJia Su (Princeton University) · Jayden Ooi (Google) · Tyler Lu (Google) · Dale Schuurmans (Google / University of Alberta) · Craig Boutilier (Google)

(33) Provably Efficient Exploration in Policy Optimization

Qi Cai (Northwestern University) · Zhuoran Yang (Princeton University) · Chi Jin (Princeton University) · Zhaoran Wang (Northwestern U)

(34) Striving for simplicity and performance in off-policy DRL: Output Normalization and Non-Uniform Sampling

Che Wang (New York University) · Yanqiu Wu (New York University) · Quan Vuong (University of California San Diego) · Keith Ross (New York University Shanghai)

(35) Converging to Team-Maxmin Equilibria in Zero-Sum Multiplayer Games

Youzhi Zhang (Nanyang Technological University) · Bo An (Nanyang Technological University)

(36) Explore, Discover and Learn: Unsupervised Discovery of State-Covering Skills

Victor Campos (Barcelona Supercomputing Center) · Alexander Trott (Salesforce Research) · Caiming Xiong (Salesforce) · Richard Socher (Salesforce) · Xavier Giro-i-Nieto (Universitat Politecnica de Catalunya) · Jordi Torres (Barcelona Supercomputing Center)

(37) Sparsified Linear Programming for Zero-Sum Equilibrium Finding

Brian Zhang (Carnegie Mellon University) · Tuomas Sandholm (Carnegie Mellon University)

(38) Extra-gradient with player sampling for faster convergence in n-player games

Samy Jelassi (Princeton University) · Carles Domingo-Enrich (NYU) · Damien Scieur (Samsung Advanced Institute of Technology AI Lab Montreal (SAIL)) · Arthur Mensch (ENS) · Joan Bruna (New York University)

(39) Entropy Minimization In Emergent Languages

Evgeny Kharitonov (FAIR) · Rahma Chaabouni (Facebook/ENS/INRIA) · Diane Bouchacourt (Facebook AI) · Marco Baroni (Facebook Artificial Intelligence Research)

(40) Discount Factor as a Regularizer in Reinforcement Learning

Ron Amit (Technion – Israel Institute of Technology) · Kamil Ciosek (Microsoft) · Ron Meir (Technion – Israel Institute of Technology)

(41) Domain Adaptive Imitation Learning

Kuno Kim (Stanford University) · Yihong Gu (Tsinghua University) · Jiaming Song (Stanford) · Shengjia Zhao (Stanford University) · Stefano Ermon (Stanford University)

(42) An Imitation Learning Approach for Cache Replacement

Evan Liu (Google) · Milad Hashemi (Google) · Kevin Swersky (Google Brain) · Parthasarathy Ranganathan (Google, USA) · Junwhan Ahn (Google)

(43) Breaking the Curse of Many Agents: Provable Mean Embedding Q-Iteration for Mean-Field Reinforcement Learning

Lingxiao Wang (Northwestern University) · Zhuoran Yang (Princeton University) · Zhaoran Wang (Northwestern U)

(44) Multi-Agent Routing Value Iteration Network

Quinlan Sykora (Uber ATG) · Mengye Ren (Uber ATG / University of Toronto) · Raquel Urtasun (Uber ATG)

(45) A Finite-Time Analysis of Q-Learning with Neural Network Function Approximation

Pan Xu (University of California, Los Angeles) · Quanquan Gu (University of California, Los Angeles)

(46) Information Particle Filter Tree: An Online Algorithm for POMDPs with Belief-Based Rewards on Continuous Domains

Johannes Fischer (Karlsruhe Institute of Technology (KIT)) · Ömer Sahin Tas (Karlsruhe Institute of Technology (KIT))

(47) Beyond UCB: Optimal and Efficient Contextual Bandits with Regression Oracles

Dylan Foster (MIT) · Alexander Rakhlin (MIT)

(48) Exploration Through Bias: Revisiting Biased Maximum Likelihood Estimation in Stochastic Multi-Armed Bandits

Xi Liu (Texas A&M University) · Ping-Chun Hsieh (National Chiao Tung University) · Yu Heng Hung (NCTU) · Anirban Bhattacharya (Texas A&M University) · P. Kumar (Texas A&M University)

(49) Adaptive Estimator Selection for Off-Policy Evaluation

Yi Su (Cornell University) · Pavithra Srinath (Microsoft Research) · Akshay Krishnamurthy (Microsoft Research)

(50) Linear bandits with Stochastic Delayed Feedback

Claire Vernade (DeepMind) · Alexandra Carpentier (Otto-von-Guericke University) · Tor Lattimore (DeepMind) · Giovanni Zappella (Amazon) · Beyza Ermis (Amazon Research) · Michael Brueckner (Amazon Research Berlin)

(51) Momentum-Based Policy Gradient Methods

Feihu Huang (University of Pittsburgh) · Shangqian Gao (University of Pittsburgh) · Jian Pei (Simon Fraser University) · Heng Huang (University of Pittsburgh)

(52) Control Frequency Adaptation via Action Persistence in Batch Reinforcement Learning

Alberto Maria Metelli (Politecnico di Milano) · Flavio Mazzolini (Politecnico di Milano) · Lorenzo Bisi (Politecnico di Milano) · Luca Sabbioni (Politecnico di Milano) · Marcello Restelli (Politecnico di Milano)

(53) What Can Learned Intrinsic Rewards Capture?

Zeyu Zheng (University of Michigan) · Junhyuk Oh (DeepMind) · Matteo Hessel (DeepMind) · Zhongwen Xu (DeepMind) · Manuel Kroiss (DeepMind) · Hado van Hasselt (DeepMind) · David Silver (Google DeepMind) · Satinder Singh (DeepMind)

(54) Reinforcement Learning with Differential Privacy

Giuseppe Vietri (University of Minnesota) · Borja de Balle Pigem (Amazon Research) · Steven Wu (University of Minnesota) · Akshay Krishnamurthy (Microsoft Research)

(55) Improved Optimistic Algorithms for Logistic Bandits

Louis Faury (Criteo) · Marc Abeille (Criteo) · Clement Calauzenes (Criteo) · Olivier Fercoq (Telecom Paris)

(56) Growing Action Spaces

Gregory Farquhar (University of Oxford) · Laura Gustafson (Facebook AI Research) · Zeming Lin (Facebook AI Research) · Shimon Whiteson (Oxford University) · Nicolas Usunier (Facebook AI Research) · Gabriel Synnaeve (Facebook AI Research)

(57) Responsive Safety in Reinforcement Learning

Adam Stooke (UC Berkeley) · Joshua Achiam (OpenAI) · Pieter Abbeel (UC Berkeley & Covariant)

(58) Stabilizing Transformers for Reinforcement Learning

Emilio Parisotto (Carnegie Mellon University) · Francis Song (DeepMind) · Jack Rae (DeepMind) · Razvan Pascanu (DeepMind) · Caglar Gulcehre (DeepMind) · Siddhant Jayakumar (DeepMind) · Max Jaderberg (DeepMind) · Raphael Lopez Kaufman (Deepmind) · Aidan Clark (DeepMind) · Seb Noury (DeepMind) · Matthew Botvinick (DeepMind) · Nicolas Heess (DeepMind) · Raia Hadsell (DeepMind)

(59) Learning to Score Behaviors for Guided Policy Optimization

Aldo Pacchiano (UC Berkeley) · Jack Parker-Holder (University of Oxford) · Yunhao Tang (Columbia University) · Krzysztof Choromanski (Google) · Anna Choromanska (NYU Tandon School of Engineering) · Michael Jordan (UC Berkeley)

(60) Neural Contextual Bandits with UCB-based Exploration

Dongruo Zhou (UCLA) · Lihong Li (Google Research) · Quanquan Gu (University of California, Los Angeles)

(61) Distributionally Robust Policy Evaluation and Learning in Offline Contextual Bandits

Nian Si (Stanford University) · Fan Zhang (Stanford University) · Zhengyuan Zhou (Stanford University) · Jose Blanchet (Stanford University)

(62) Efficient Policy Learning from Surrogate-Loss Classification Reductions

Andrew Bennett (Cornell University) · Nathan Kallus (Cornell University)

(63) Learning Robot Skills with Temporal Variational Inference

Tanmay Shankar (Facebook AI Research) · Abhinav Gupta (Carnegie Mellon University)

(64) Leveraging Procedural Generation to Benchmark Reinforcement Learning

Karl Cobbe (OpenAI) · Chris Hesse (OpenAI) · Jacob Hilton (OpenAI) · John Schulman (OpenAI)

(65) What can I do here? A Theory of Affordances in Reinforcement Learning

Khimya Khetarpal (McGill University, Mila Montreal) · Zafarali Ahmed (DeepMind) · Gheorghe Comanici (DeepMind) · David Abel (Brown University) · Doina Precup (DeepMind)

(66) Data Valuation using Reinforcement Learning

Jinsung Yoon (University of California, Los Angeles) · Sercan O. Arik (Google) · Tomas Pfister (Google)

(67) Reward-Free Exploration for Reinforcement Learning

Chi Jin (Princeton University) · Akshay Krishnamurthy (Microsoft Research) · Max Simchowitz (UC Berkeley) · Tiancheng Yu (MIT)

(68) Designing Optimal Dynamic Treatment Regimes: A Causal Reinforcement Learning Approach

Junzhe Zhang (Columbia University)

(69) Lookahead-Bounded Q-learning

Ibrahim El Shar (University of Pittsburgh) · Daniel Jiang (University of Pittsburgh)

(70) Evaluating the Performance of Reinforcement Learning Algorithms

Scott Jordan (University of Massachusetts Amherst) · Yash Chandak (University of Massachusetts Amherst) · Daniel Cohen (University of Massachusetts Amherst) · Mengxue Zhang (UMass Amherst) · Philip Thomas (University of Massachusetts Amherst)

(71) Provable Self-Play Algorithms for Competitive Reinforcement Learning

Yu Bai (Salesforce Research) · Chi Jin (Princeton University)

(72) A Game Theoretic Perspective on Model-Based Reinforcement Learning

Aravind Rajeswaran (University of Washington) · Igor Mordatch (OpenAI) · Vikash Kumar (Google)

(73) Optimizing for the Future in Non-Stationary MDPs

Yash Chandak (University of Massachusetts Amherst) · Georgios Theocharous (Adobe Research) · Shiv Shankar (University of Massachusetts) · Martha White (University of Alberta) · Sridhar Mahadevan (Adobe Research) · Philip Thomas (University of Massachusetts Amherst)

(74) Adaptive Droplet Routing in Digital Microfluidic Biochips Using Deep Reinforcement Learning

Tung-Che Liang (Duke University) · Zhanwei Zhong (Duke University) · Yaas Bigdeli (Duke University) · Tsung-Yi Ho (National Tsing Hua University) · Richard Fair (Duke University) · Krishnendu Chakrabarty (Duke University)

(75) Sample Factory: Egocentric 3D Control from Pixels at 100000 FPS with Asynchronous Reinforcement Learning

Aleksei Petrenko (University of Southern California) · Zhehui Huang (University of Southern California) · Tushar Kumar (University of Southern California) · Gaurav Sukhatme (University of Southern California) · Vladlen Koltun (Intel Labs)

(76) Q-value Path Decomposition for Deep Multiagent Reinforcement Learning

Yaodong Yang (Tianjin University) · Jianye Hao (Tianjin University) · Guangyong Chen (Tencent) · Hongyao Tang (Tianjin University) · Yingfeng Chen (NetEase Fuxi AI Lab) · Yujing Hu (NetEase Fuxi AI Lab) · Changjie Fan (Netease) · Zhongyu Wei (Fudan University)

(77) Finite-Time Last-Iterate Convergence for Multi-Agent Learning in Games

Tianyi Lin (UC Berkeley) · Zhengyuan Zhou (Stanford University) · Panayotis Mertikopoulos (CNRS) · Michael Jordan (UC Berkeley)

(78) When Demands Evolve Larger and Noisier: Learning and Earning in a Growing Environment

Feng Zhu (Peking University) · Zeyu Zheng (UC Berkeley)

(79) Context-aware Dynamics Model for Generalization in Model-Based Reinforcement Learning

Kimin Lee (UC Berkeley) · Younggyo Seo (KAIST) · Seunghyun Lee (KAIST) · Honglak Lee (Google / U. Michigan) · Jinwoo Shin (KAIST)

(80) Structured Policy Iteration for Linear Quadratic Regulator

Youngsuk Park (Stanford University) · Ryan Rossi (Adobe Research) · Zheng Wen (DeepMind) · Gang Wu (Adobe Research) · Handong Zhao (Adobe Research)

(81) Monte-Carlo Tree Search as Regularized Policy Optimization

Jean-Bastien Grill (DeepMind) · Florent Altché (DeepMind) · Yunhao Tang (Columbia University) · Thomas Hubert (DeepMind) · Michal Valko (DeepMind) · Ioannis Antonoglou (DeepMind) · Remi Munos (DeepMind)

(82) On the Expressivity of Neural Networks for Deep Reinforcement Learning

Kefan Dong (Tsinghua University) · Yuping Luo (Princeton University) · Tianhe Yu (Stanford University) · Chelsea Finn (Stanford) · Tengyu Ma (Stanford)

(83) Intrinsic Reward Driven Imitation Learning via Generative Model

Xingrui Yu (University of Technology Sydney) · Yueming LYU (University of Technology Sydney) · Ivor Tsang (University of Technology Sydney)

(84) Can Increasing Input Dimensionality Improve Deep Reinforcement Learning?

Kei Ota (Mitsubishi Electric Corporation) · Tomoaki Oiki (Mitsubishi Electric) · Devesh Jha (Mitsubishi Electric Research Labs) · Toshisada Mariyama (Mitsubishi Electric) · Daniel Nikovski (Mitsubishi Electric Research Labs)

(85) Batch Reinforcement Learning with Hyperparameter Gradients

Byung-Jun Lee (KAIST) · Jongmin Lee (KAIST) · Peter Vrancx (PROWLER.io) · Dongho Kim (PROWLER.io) · Kee-Eung Kim (KAIST)

(86) Sub-Goal Trees--a Framework for Goal-Based Reinforcement Learning

Tom Jurgenson (Technion) · Or Avner (Technion) · Edward Groshev (Osaro, Inc.) · Aviv Tamar (Technion)

(87) Agent57: Outperforming the Atari Human Benchmark

Adrià Puigdomenech Badia (DeepMind) · Bilal Piot (DeepMind) · Steven Kapturowski (DeepMind) · Pablo Sprechmann (Google DeepMind) · Oleksandr Vitvitskyi (DeepMind) · Zhaohan Guo (DeepMind) · Charles Blundell (DeepMind)

(88) Stochastically Dominant Distributional Reinforcement Learning

John Martin (Stevens Institute of Technology) · Michal Lyskawinski (Stevens Institute of Technology) · Xiaohu Li (Stevens Institute of Technology) · Brendan Englot (Stevens Institute of Technology)

(89) Gradient-free Online Learning in Continuous Games with Delayed Rewards

Amélie Héliou (Criteo) · Panayotis Mertikopoulos (CNRS) · Zhengyuan Zhou (Stanford University)

(90) Fast Adaptation to New Environments via Policy-Dynamics Value Functions

Roberta Raileanu (NYU) · Max Goldstein (NYU) · Arthur Szlam (Facebook) · Rob Fergus (Facebook AI Research, NYU)

(91) A Markov Decision Process Model for Socio-Economic Systems Impacted by Climate Change

Salman Sadiq Shuvo (University of South Florida) · Yasin Yilmaz (University of South Florida) · Alan Bush (University of South Florida) · Mark Hafen (University of South Florida)

(92) Fast computation of Nash Equilibria in Imperfect Information Games

Remi Munos (DeepMind) · Julien Perolat (DeepMind) · Jean-Baptiste Lespiau (DeepMind) · Mark Rowland (DeepMind) · Bart De Vylder (DeepMind) · Marc Lanctot (DeepMind) · Finbarr Timbers (DeepMind) · Daniel Hennes (DeepMind) · Shayegan Omidshafiei (DeepMind) · Audrunas Gruslys (DeepMind) · Mohammad Gheshlaghi Azar (DeepMind) · Edward Lockhart (DeepMind) · Karl Tuyls (DeepMind)

(93) Inverse Active Sensing: Modeling and Understanding Timely Decision-Making

Daniel Jarrett (University of Cambridge) · Mihaela van der Schaar (University of Cambridge)

(94) Tightening Exploration in Upper Confidence Reinforcement Learning

Hippolyte Bourel (ENS Rennes) · Odalric-Ambrym Maillard (Inria Lille - Nord Europe) · Mohammad Sadegh Talebi (University of Copenhagen)

(95) Bootstrap Latent-Predictive Representations for Multitask Reinforcement Learning

Zhaohan Guo (DeepMind) · Bernardo Avila Pires (DeepMind) · Mohammad Gheshlaghi Azar (DeepMind) · Bilal Piot (DeepMind) · Florent Altché (DeepMind) · Jean-Bastien Grill (DeepMind) · Remi Munos (DeepMind)

(96) Invariant Causal Prediction for Block MDPs

Clare Lyle (University of Oxford) · Amy Zhang (McGill University) · Angelos Filos (University of Oxford) · Shagun Sodhani (Facebook AI Research) · Marta Kwiatkowska (Oxford University) · Yarin Gal (University of Oxford) · Doina Precup (McGill University / DeepMind) · Joelle Pineau (McGill University / Facebook)

(97) Deep Reinforcement Learning with Smooth Policy

Qianli Shen (Peking University) · Yan Li (Georgia Tech) · Haoming Jiang (Georgia Tech) · Zhaoran Wang (Northwestern) · Tuo Zhao (Gatech)

(98) Model-free Reinforcement Learning in Infinite-horizon Average-reward Markov Decision Processes

Chen-Yu Wei (University of Southern California) · Mehdi Jafarnia (University of Southern California) · Haipeng Luo (University of Southern California) · Hiteshi Sharma (University of Southern California) · Rahul Jain (USC)

(99) Kinematic State Abstraction and Provably Efficient Rich-Observation Reinforcement Learning

Dipendra Misra (Microsoft) · Mikael Henaff (Microsoft) · Akshay Krishnamurthy (Microsoft Research) · John Langford (Microsoft Research)

(100) Minimax-Optimal Off-Policy Evaluation with Linear Function Approximation

Yaqi Duan (Princeton University) · Zeyu Jia (Peking University) · Mengdi Wang (Princeton University)

(101) Enhanced POET: Open-ended Reinforcement Learning through Unbounded Invention of Learning Challenges and their Solutions

Rui Wang (Uber AI) · Joel Lehman () · Aditya Rawal (Uber AI Labs) · Jiale Zhi (Uber AI) · Yulun Li (Uber AI) · Jeffrey Clune (OpenAI) · Kenneth Stanley (Uber AI and University of Central Florida)

(102) Adaptive Reward-Poisoning Attacks against Reinforcement Learning

Xuezhou Zhang (UW-Madison) · Yuzhe Ma (Univ. of Wisconsin-Madison) · Adish Singla (Max Planck Institute (MPI-SWS)) · Jerry Zhu (University of Wisconsin-Madison)

(103) Estimation of Bounds on Potential Outcomes For Decision Making

Maggie Makar (MIT) · Fredrik Johansson (Chalmers University of Technology) · John Guttag (MIT) · David Sontag (Massachusetts Institute of Technology)

(104) Provably Efficient Model-based Policy Adaptation

Yuda Song (University of California, San Diego) · Aditi Mavalankar (University of California San Diego) · Wen Sun (Microsoft Research) · Sicun Gao (University of California, San Diego)

(105) Stochastic Regret Minimization in Extensive-Form Games

Gabriele Farina (Carnegie Mellon University) · Christian Kroer (Columbia University) · Tuomas Sandholm (Carnegie Mellon University)

(106) Maximum Entropy Gain Exploration for Long Horizon Multi-goal Reinforcement Learning

Silviu Pitis (University of Toronto) · Harris Chan (University of Toronto, Vector Institute) · Stephen Zhao (University of Toronto) · Bradly Stadie (Vector Institute) · Jimmy Ba (University of Toronto)

(107) Cautious Adaptation For Reinforcement Learning in Safety-Critical Settings

Jesse Zhang (UC Berkeley) · Brian Cheung (UC Berkeley) · Chelsea Finn (Stanford) · Sergey Levine (UC Berkeley) · Dinesh Jayaraman (University of Pennsylvania)

(108) An Optimistic Perspective on Offline Deep Reinforcement Learning

Rishabh Agarwal (Google Research, Brain Team) · Dale Schuurmans (Google / University of Alberta) · Mohammad Norouzi (Google Brain)

(109) Learning with Good Feature Representations in Bandits and in RL with a Generative Model

Gellért Weisz (DeepMind) · Tor Lattimore (DeepMind) · Csaba Szepesvari (DeepMind/University of Alberta)

(110) Representations for Stable Off-Policy Reinforcement Learning

Dibya Ghosh (Google) · Marc Bellemare (Google Brain)

(111) Accountable Off-Policy Evaluation via a Kernelized Bellman Statistics

Yihao Feng (The University of Texas at Austin) · Tongzheng Ren (UT Austin) · Ziyang Tang (University of Texas at Austin) · Qiang Liu (UT Austin)

(112) Multi-Step Greedy Reinforcement Learning Algorithms

Manan Tomar (Indian Institute of Technology, Madras) · Yonathan Efroni (Technion) · Mohammad Ghavamzadeh (Facebook AI Research)

(113) On the Global Convergence Rates of Softmax Policy Gradient Methods

Jincheng Mei (Google / University of Alberta) · Chenjun Xiao (Google / University of Alberta) · Csaba Szepesvari (DeepMind/University of Alberta) · Dale Schuurmans (University of Alberta)

(114) Estimating Q(s,s') with Deterministic Dynamics Gradients

Ashley Edwards (Uber AI) · Himanshu Sahni (Georgia Institute of Technology) · Rosanne Liu (Deep Collective) · Jane Hung (Uber) · Ankit Jain (Uber AI Labs) · Rui Wang (Uber AI) · Adrien Ecoffet (Uber AI) · Thomas Miconi (Uber AI Labs) · Charles Isbell (Georgia Institute of Technology) · Jason Yosinski (Uber Labs)

(115) Interpretable Off-Policy Evaluation in Reinforcement Learning by Highlighting Influential Transitions

Omer Gottesman (Harvard University) · Joseph Futoma (Harvard University) · Yao Liu (Stanford University) · Sonali Parbhoo (Harvard University) · Leo Celi (MIT) · Emma Brunskill (Stanford University) · Finale Doshi-Velez (Harvard University)

(116) CURL: Contrastive Unsupervised Representation Learning for Reinforcement Learning

Michael Laskin (UC Berkeley) · Pieter Abbeel (UC Berkeley & Covariant) · Aravind Srinivas (UC Berkeley)

(117) Generative Pretraining From Pixels

Mark Chen (OpenAI) · Alec Radford (OpenAI) · Rewon Child (OpenAI) · Jeffrey K Wu (OpenAI) · Heewoo Jun (OpenAI) · David Luan (OpenAI) · Ilya Sutskever (OpenAI)

(118) R2-B2: Recursive Reasoning-Based Bayesian Optimization for No-Regret Learning in Games

Zhongxiang Dai (National University of Singapore) · Yizhou Chen (National University of Singapore) · Bryan Kian Hsiang Low (National University of Singapore) · Patrick Jaillet (MIT) · Teck-Hua Ho (National University of Singapore)

(119) Revisiting Fundamentals of Experience Replay

William Fedus (University of Montreal/Google Brain) · Prajit Ramachandran (Google) · Rishabh Agarwal (Google Research, Brain Team) · Yoshua Bengio (Mila / U. Montreal) · Hugo Larochelle (Google Brain) · Mark Rowland (DeepMind) · Will Dabney (DeepMind)

(120) Decision Trees for Decision-Making under the Predict-then-Optimize Framework

Adam Elmachtoub (Columbia University) · Jason Cheuk Nam Liang (MIT) · Ryan McNellis (Amazon)

(121) Learning to Navigate in Synthetically Accessible Chemical Space Using Reinforcement Learning

Sai Krishna Gottipati (99andBeyond) · Boris Sattarov (99andBeyond) · Sufeng Niu (LinkedIn) · Haoran Wei (University of Delaware) · Yashaswi Pathak (International Institute of Information Technology, Hyderabad) · Shengchao Liu (Mila, Université de Montréal) · Simon Blackburn (Mila) · Karam Thomas (99andBeyond) · Connor Coley (MIT) · Jian Tang (HEC Montreal & MILA) · Sarath Chandar (Mila / École Polytechnique de Montréal) · Yoshua Bengio (Mila / U. Montreal)

(122) Flexible and Efficient Long-Range Planning Through Curious Exploration

Aidan Curtis (Rice University) · Minjian Xin (Shanghai Jiao Tong University) · Dilip Arumugam (Stanford University) · Kevin Feigelis (Stanford University) · Daniel Yamins (Stanford University)

(123) Predictive Coding for Locally-Linear Control

Rui Shu (Stanford University) · Tung Nguyen (VinAI Research) · Yinlam Chow (Google) · Tuan Pham (VinAI) · Khoat Than (VinAI & HUST) · Mohammad Ghavamzadeh (Facebook) · Stefano Ermon (Stanford University) · Hung Bui (VinAI Research)

(124) Bidirectional Model-based Policy Optimization

Hang Lai (Shanghai Jiao Tong University) · Jian Shen (Shanghai Jiao Tong University) · Weinan Zhang (Shanghai Jiao Tong University) · Yong Yu (Shanghai Jiao Tong University)

(125) Efficiently Solving MDPs with Stochastic Mirror Descent

Yujia Jin (Stanford University) · Aaron Sidford (Stanford)

(126) A Distributional View on Multi-Objective Policy Optimization

Abbas Abdolmaleki (Google DeepMind) · Sandy Huang (DeepMind) · Leonard Hasenclever (DeepMind) · Michael Neunert (Google DeepMind) · Martina Zambelli (DeepMind) · Murilo Martins (DeepMind) · Francis Song (DeepMind) · Nicolas Heess (DeepMind) · Raia Hadsell (DeepMind) · Martin Riedmiller (DeepMind)





