ICML 2017: Interview with the Best Paper Winner, Plus a DeepMind vs. Google vs. Microsoft Paper Face-off

August 7, 2017 · 引力空间站

ICML 2017, a top machine learning conference, has opened, and the Best Paper and Test of Time awards have been announced. This article introduces the award-winning papers and, through the accepted papers, looks at what distinguishes Google, DeepMind, and Microsoft at this year's conference. In short, reinforcement learning and neural networks remain the focus for all three. DeepMind has 19 accepted papers, Google 38, and Microsoft 39; DeepMind rarely collaborates with outsiders, while nearly all of Microsoft's papers are collaborations.


Best Paper


Understanding Black-box Predictions via Influence Functions

Authors: Pang Wei Koh, Percy Liang



Abstract


How can we explain the predictions of a black-box model? In this paper, we use influence functions, a classic technique from robust statistics, to trace a model's prediction through the learning algorithm and back to its training data, thereby identifying the training points most responsible for a given prediction. To scale influence functions up to modern machine learning settings, we develop a simple, efficient implementation that requires only oracle access to gradients and Hessian-vector products. We show that even on non-convex and non-differentiable models where the theory breaks down, approximations to influence functions can still provide valuable information. On linear models and convolutional neural networks, we demonstrate that influence functions are useful for multiple purposes: understanding model behavior, debugging models, detecting dataset errors, and even creating visually indistinguishable training-set attacks.
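The central quantity can be sketched on a small model. Below is a minimal, illustrative implementation (our own toy setup, not the authors' released code) for an L2-regularized logistic regression, where the Hessian is positive definite and the influence formula I(z, z_test) = -∇L(z_test)ᵀ H⁻¹ ∇L(z) applies exactly; in large models the H⁻¹v term is approximated with Hessian-vector products, but here we solve the small linear system directly.

```python
import numpy as np

# Toy data for a ridge-regularized logistic regression (names are ours).
rng = np.random.default_rng(0)
n, d, lam = 200, 5, 0.1
X = rng.normal(size=(n, d))
y = (X @ rng.normal(size=d) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_point(w, x, t):
    # Gradient of the per-example logistic loss at (x, t).
    return (sigmoid(x @ w) - t) * x

def full_grad(w):
    # Gradient of the full objective: mean loss + (lam/2)||w||^2.
    return (sigmoid(X @ w) - y) @ X / n + lam * w

def hessian(w):
    p = sigmoid(X @ w)
    return (X.T * (p * (1 - p))) @ X / n + lam * np.eye(d)

# Fit by Newton's method (the objective is smooth and strongly convex).
w = np.zeros(d)
for _ in range(20):
    w -= np.linalg.solve(hessian(w), full_grad(w))

# Influence of training point z_i on the loss at a test point z_test:
#   I(z_i, z_test) = -grad(z_test)^T H^{-1} grad(z_i)
x_test, y_test = X[0], y[0]
h_inv_g = np.linalg.solve(hessian(w), grad_point(w, x_test, y_test))
influences = np.array([-grad_point(w, X[i], y[i]) @ h_inv_g
                       for i in range(n)])
top = int(np.argmax(np.abs(influences)))  # most influential training point
```

Ranking training points by |influence| is what supports the debugging and dataset-error-detection uses the abstract describes.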


Honorable Mentions


Lost Relatives of the Gumbel Trick
Matej Balog, Nilesh Tripuraneni, Zoubin Ghahramani, Adrian Weller


Modular Multitask Reinforcement Learning with Policy Sketches
Jacob Andreas, Dan Klein, Sergey Levine


A Unified Maximum Likelihood Approach for Estimating Symmetric Properties of Discrete Distributions
Jayadev Acharya, Hirakendu Das, Alon Orlitsky, Ananda Suresh


Test of Time Award


Combining Online and Offline Knowledge in UCT

Authors: Sylvain Gelly and David Silver



Abstract


The UCT algorithm learns a value function online using sample-based search. The TD(λ) algorithm can learn a value function offline for the on-policy distribution. We consider three approaches for combining offline and online value functions in the UCT algorithm. First, the offline value function is used as a default policy during Monte Carlo simulation. Second, the UCT value function is combined with an online estimate of action values. Third, the offline value function is used as prior knowledge in the UCT search tree. We evaluate these algorithms in 9×9 Go against GnuGo 3.7.10. The first algorithm performs better than UCT with a random simulation policy, but, surprisingly, worse than UCT with a weaker handcrafted simulation policy. The second algorithm outperforms plain UCT. The third algorithm outperforms UCT with handcrafted prior knowledge. We combine these algorithms in MoGo, the world's strongest 9×9 Go program, and each technique significantly improves MoGo's playing strength.
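For reference, the node-selection rule at the heart of UCT, and the third combination method above (seeding a node's statistics with offline knowledge as virtual experience), can be sketched as follows. The function names and the exploration constant are illustrative, not taken from MoGo.

```python
import math

def uct_select(stats, c=1.4):
    """Pick the action maximizing Q(s,a) + c*sqrt(ln N(s) / N(s,a)).

    stats: list of (total_value, visit_count) pairs, one per action.
    """
    parent_n = sum(n for _, n in stats)
    def score(pair):
        total, n = pair
        if n == 0:
            return float("inf")  # always try unvisited actions first
        return total / n + c * math.sqrt(math.log(parent_n) / n)
    return max(range(len(stats)), key=lambda i: score(stats[i]))

def node_with_prior(prior_value, prior_count):
    # Third method in the paper: initialize a new node's statistics from an
    # offline value function, as if it had already been visited prior_count
    # times with mean outcome prior_value.
    return (prior_value * prior_count, prior_count)

stats = [node_with_prior(0.6, 10), node_with_prior(0.3, 10)]
best = uct_select(stats)  # the action with the higher prior wins initially
```

As real visits accumulate, the seeded statistics are gradually washed out, so the prior only steers early search.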


Honorable Mentions


Pegasos: Primal estimated sub-gradient solver for SVM
Shai Shalev-Shwartz, Yoram Singer, Nathan Srebro


A Bound on the Label Complexity of Agnostic Active Learning
Steve Hanneke


Interview with ICML 2017 Best Paper Winner Percy Liang


This year's ICML Best Paper award went to the Stanford study Understanding Black-box Predictions via Influence Functions. One of its authors, Percy Liang, is an assistant professor of computer science at Stanford University and a member of the Stanford AI Lab. In fact, Dr. Liang had four papers accepted at ICML this year. Last year, he received the IJCAI-16 Computers and Thought Award, which honors outstanding young scientists in artificial intelligence, for his work in natural language processing and in building better machine learning models.


Below are excerpts from an earlier interview with Dr. Liang; here is how this rising star sees things:


What is your biggest eureka moment in research?


Liang: My answer may not really count as an answer, but this is how it goes. It is not as if you sit under an apple tree or in a bathtub one day, a flash of inspiration strikes, and everything becomes clear. Research is an incremental process. You try many things, and each one gives you a little more insight. When you finally prove a theorem or get an experiment to work, you feel very excited, but it would be unfair to credit that single moment of inspiration, because before the success you went through a long process, put in a great deal of effort, and tried all sorts of things.


The IJCAI Research Excellence Award recognizes scientists who have carried out high-quality research programs throughout their careers and produced a substantial body of results; its recipients are top scientists in artificial intelligence. This year's winner, Michael I. Jordan, was one of your PhD advisors, along with Dan Klein, both outstanding researchers in AI. What impressed you most about working with Professor Jordan? And with Professor Klein?


Liang: I was really lucky to have these two advisors. Not only did I learn a great deal from each of them, but what I learned from them was complementary, and not just within our research areas (machine learning and NLP).


Mike's knowledge is extremely broad. He loves learning across disciplines: biology, computer science, statistics, and so on, and he can teach a new course as soon as he has absorbed the material. He also has an uncanny ability to see the world with extreme clarity and to distill the essence of an idea out of a tangle of mathematical detail.


Dan has excellent intuition for empirical problems. He taught me how to understand data. He has a refined, distinctive taste for building models and developing techniques. I also learned a great deal from Dan about writing, presenting, teaching, and advising, all of which has helped me enormously.


At ICML'16 you organized the workshop "Reliable ML in the Wild", inviting researchers from robotics, autonomous systems control, and other fields to discuss how to build reliable machine learning. What concepts or methods can we draw from this kind of interdisciplinary approach to advance machine learning?


Liang: First, let me give some background on the workshop. Machine learning keeps evolving. When I was a graduate student, the field was still fairly small; people did interesting research, but mostly at the experimental stage. There were some major applications, but they were rather constrained, essentially confined to the lab ("in vitro"). Now the situation is completely different: in a very short time, machine learning has come to be applied everywhere, and that trend will continue.


My student Jacob Steinhardt and I both realized that many important questions remain open: What happens when test conditions differ greatly from training conditions? Can systems fail gracefully (editor's note: detect a fatal error, log it, and shut down cleanly) or learn to adapt? How should we deal with adversaries trying to game machine learning systems? We do not yet have good answers to these questions. The goal of the workshop was to bring together researchers working on topics related to reliable machine learning, such as domain adaptation, safe reinforcement learning, and causality, to build connections and form a community.


Accepted papers: 6.3% have an author from Google or DeepMind


By a rough count, about 25% of the 434 papers accepted at ICML 2017 have a Chinese first author.


1. Subfields


To gauge which subfields are hot at ICML this year, we can look at the session topics. The conference has 10 sessions (not counting workshop, poster, and similar sessions), each with around five paper talks, so the session topics give a rough picture of how the accepted papers are distributed across subfields.


Deep learning appears 9 times (nearly every session includes it), followed by continuous optimization (7), reinforcement learning (6), RNNs (4), and online learning (4).


2. Institutions


Earlier this year, Andrej Karpathy, Tesla's director of AI and vision and previously a researcher at OpenAI, ran some statistics on ICML 2017 papers [1]. Across all 1,600-plus papers, 961 institutions appear, 420 of them only once. Grouping "Google", "Google Inc." and "Google Brain Research" together, and likewise "Stanford" and "Stanford University", the top 30 institutions are:


       44 Google

       33 Microsoft

       32 CMU

       25 DeepMind

       23 MIT

       22 Berkeley

       22 Stanford

       16 Cambridge

       16 Princeton

       15 None

       14 Georgia Tech

       13 Oxford

       11 UT Austin

       10 Duke

       10 Facebook

        9 ETH Zurich

        9 EPFL

        8 Columbia

        8 Harvard

        8 Michigan

        7 UCSD

        7 IBM

        7 New York

        7 Peking

        6 Cornell

        6 Washington

        6 Minnesota

        5 Virginia

        5 Weizmann Institute of Science

        5 Microsoft / Princeton / IAS


Karpathy himself is not sure what the "None" in 15th place refers to. Microsoft and Princeton also appear multiple times, and "New York" shows up 7 times. Even so, the statistics are quite telling.


Karpathy then separated out the industrial labs (DeepMind, Google, Microsoft, Facebook, IBM, Disney, Amazon, and Adobe) and found that:


Of the papers accepted at ICML 2017, roughly 20-25% have industry involvement, and 6.3% have an author from Google or DeepMind.


Although Karpathy notes that about 75% of the papers still come from academia, this confirms a trend: industry participation in the papers accepted at top deep learning / machine learning and AI conferences keeps growing.


ICML 2017 chairs: over 10% from Google, fewer than 10 Chinese scholars


The ICML 2017 organizing team is formidable: nearly 130 people, all from top universities and research institutions, with a strong industry presence. General chair Tony Jebara himself straddles academia and industry at Columbia University & Netflix, and the two program chairs, Doina Precup and Yee Whye Teh, come from McGill University and the University of Oxford respectively.


By 新智元's count, among the roughly 110 area chairs (excluding tutorial, workshop, publicity, and similar chairs):


  • Google: 11, including 4 from Google Brain (Ian Goodfellow is listed under OpenAI in the PC handbook; 新智元 counted him under Google Brain)

  • DeepMind: 7

  • Microsoft Research: 7

  • Facebook: 4, including 3 from FAIR

  • OpenAI: 2

  • Tencent AI Lab: 1

  • NEC Laboratories: 1


These area chairs span generations: veterans like Rich Sutton and Jürgen Schmidhuber, mid-career leaders like Ruslan Salakhutdinov (head of AI at Apple, professor at CMU) and Nando de Freitas (University of Oxford & DeepMind), and newer names like Oriol Vinyals (DeepMind) and Ian Goodfellow (Google Brain).


Scanning the chair list for Chinese scholars has become a regular part of 新智元's conference coverage. This time we spotted Jun Zhu of Tsinghua University, Microsoft Research principal researcher Lihong Li, and Tencent AI Lab director Tong Zhang.


But across the whole organizing team (130-plus people), there are fewer than 10 Chinese scholars.


Together, DeepMind, Google, and Microsoft have 96 papers, about 22% of this year's ICML acceptances. Let's look at each in turn:


DeepMind: 19 papers, 7 area chairs, almost entirely in-house work


Reinforcement learning and neural networks are the keywords of DeepMind's accepted papers this year, as the titles alone make clear.


A more interesting pattern, though, is that DeepMind's papers appear to be largely homegrown: 15 of the 19 have authors exclusively from DeepMind. As a result, it is common for one person to appear on 3-5 papers (with different co-authors); Oriol Vinyals is on 6.


Of the 4 papers DeepMind co-authored with other institutions, 3 are with researchers at Google Brain.


Returning to the research topics:


  • Better understanding of reinforcement learning: a distributional perspective on RL, posterior sampling in RL, minimax regret bounds for RL, and hierarchical RL

  • Better understanding and training of neural networks: why do DNNs generalize well? curriculum learning for neural networks, and updating networks asynchronously with synthetic gradients (without waiting for full backpropagation)

  • And DeepMind's long-standing theme of fusing AI with neuroscience: examining neural networks through the lens of cognitive psychology, and Neural Episodic Control


Area chairs: Danilo Rezende, James Martens, Nando de Freitas, Oriol Vinyals, Raia Hadsell, Razvan Pascanu, Shakir Mohamed

 

Accepted papers and authors:


Decoupled Neural Interfaces using Synthetic Gradients

Max Jaderberg, Wojciech Marian Czarnecki, Simon Osindero, Oriol Vinyals, Alex Graves, David Silver, Koray Kavukcuoglu


Parallel Multiscale Autoregressive Density Estimation

Scott Reed, Aäron van den Oord, Nal Kalchbrenner, Ziyu Wang, Dan Belov, Nando de Freitas


Understanding Synthetic Gradients and Decoupled Neural Interfaces

Wojtek Czarnecki, Grzegorz Świrszcz, Max Jaderberg, Simon Osindero, Oriol Vinyals, Koray Kavukcuoglu


Minimax Regret Bounds for Reinforcement Learning

Mohammad Gheshlaghi Azar, Ian Osband, Remi Munos


Video Pixel Networks

Nal Kalchbrenner, Aaron van den Oord, Karen Simonyan, Ivo Danihelka, Oriol Vinyals, Alex Graves, Koray Kavukcuoglu


Sharp Minima Can Generalize For Deep Nets

Laurent Dinh (Univ. Montreal), Razvan Pascanu, Samy Bengio (Google Brain), Yoshua Bengio (Univ. Montreal)


Why is Posterior Sampling Better than Optimism for Reinforcement Learning?

Ian Osband, Benjamin Van Roy


DARLA: Improving Zero-Shot Transfer in Reinforcement Learning

Irina Higgins*, Arka Pal*, Andrei Rusu, Loic Matthey, Chris Burgess, Alexander Pritzel, Matt Botvinick, Charles Blundell, Alexander Lerchner


Automated Curriculum Learning for Neural Networks

Alex Graves, Marc G. Bellemare, Jacob Menick, Koray Kavukcuoglu, Remi Munos


Learning to learn without gradient descent by gradient descent

Yutian Chen, Matthew Hoffman, Sergio Gomez, Misha Denil, Timothy Lillicrap, Matthew Botvinick , Nando de Freitas


A Distributional Perspective on Reinforcement Learning

Marc G. Bellemare*, Will Dabney*, Remi Munos


A Laplacian Framework for Option Discovery in Reinforcement Learning

Marlos Machado (Univ. Alberta), Marc G. Bellemare, Michael Bowling


Neural Audio Synthesis of Musical Notes with WaveNet Autoencoders

Sander Dieleman, Karen Simonyan, Jesse Engel (Google Brain), Cinjon Resnick (Google Brain), Adam Roberts (Google Brain), Douglas Eck (Google Brain), Mohammad Norouzi (Google Brain)


Cognitive Psychology for Deep Neural Networks: A Shape Bias Case Study

Samuel Ritter*, David Barrett*, Adam Santoro, Matt Botvinick


Count-Based Exploration with Neural Density Models

Georg Ostrovski, Marc Bellemare, Aaron van den Oord, Remi Munos


The Predictron: End-to-End Learning and Planning

David Silver, Hado van Hasselt, Matteo Hessel, Tom Schaul, Arthur Guez, Tim Harley, Gabriel Dulac-Arnold, David Reichert, Neil Rabinowitz, Andre Barreto, Thomas Degris


FeUdal Networks for Hierarchical Reinforcement Learning

Sasha Vezhnevets, Simon Osindero, Tom Schaul, Nicolas Heess, Max Jaderberg, David Silver, Koray Kavukcuoglu


Neural Episodic Control

Alex Pritzel, Benigno Uria, Sriram Srinivasan, Adria Puigdomenech, Oriol Vinyals, Demis Hassabis, Daan Wierstra, Charles Blundell


Neural Message Passing for Quantum Chemistry

Justin Gilmer (Google Brain), Sam Schoenholz (Google Brain), Patrick Riley (Google), Oriol Vinyals, George Dahl (Google Brain)


Google: 38 papers, 11 area chairs, 13 workshops and tutorials


Google sent more than 130 people to ICML 2017. As a platinum sponsor, besides 38 accepted papers (11 with all authors from Google, not counting DeepMind here), Google had a presence at 12 workshops and 1 tutorial, whether by organizing the event, contributing invited speakers, having papers accepted, or all of the above.


Google's paper keywords are likewise neural networks (6 papers) and reinforcement learning (4), but other machine learning subfields appear too: datasets, feature selection, objective functions, clustering algorithms, nearest neighbors, genetic algorithms, Gaussian processes, MCMC, and so on.


Meanwhile, the 13 workshops and tutorials Google took part in fall into two groups: applications, such as speech and natural language processing and discussions of security and privacy, and the research process itself, such as human-interpretable machine learning and reproducibility and replication in machine learning. The latter in particular, with papers multiplying and the field moving ever faster, is a genuinely important topic.


Program committee members:

Alex Kulesza, Amr Ahmed, Andrew Dai, Corinna Cortes, George Dahl, Hugo Larochelle, Matthew Hoffman, Maya Gupta, Moritz Hardt, Quoc Le


Sponsorship co-chair: Ryan Adams


Robust Adversarial Reinforcement Learning

Lerrel Pinto, James Davidson, Rahul Sukthankar, Abhinav Gupta


Tight Bounds for Approximate Carathéodory and Beyond

Vahab Mirrokni, Renato Leme, Adrian Vladu, Sam Wong


Sharp Minima Can Generalize For Deep Nets (with DeepMind)

Laurent Dinh, Razvan Pascanu, Samy Bengio, Yoshua Bengio


Geometry of Neural Network Loss Surfaces via Random Matrix Theory

Jeffrey Pennington, Yasaman Bahri


Conditional Image Synthesis with Auxiliary Classifier GANs

Augustus Odena, Christopher Olah, Jon Shlens


Learning Deep Latent Gaussian Models with Markov Chain Monte Carlo

Maithra Raghu, Ben Poole, Surya Ganguli, Jon Kleinberg, Jascha Sohl-Dickstein


On the Expressive Power of Deep Neural Networks

Maithra Raghu, Ben Poole, Surya Ganguli, Jon Kleinberg, Jascha Sohl-Dickstein


AdaNet: Adaptive Structural Learning of Artificial Neural Networks

Corinna Cortes, Xavi Gonzalvo, Vitaly Kuznetsov, Mehryar Mohri, Scott Yang


Learned Optimizers that Scale and Generalize

Olga Wichrowska, Niru Maheswaranathan, Matthew Hoffman, Sergio Gomez, Misha Denil, Nando de Freitas, Jascha Sohl-Dickstein


Adaptive Feature Selection: Computationally Efficient Online Sparse Linear Regression under RIP

Satyen Kale, Zohar Karnin, Tengyuan Liang, David Pal


Algorithms for ℓp Low-Rank Approximation

Flavio Chierichetti, Sreenivas Gollapudi, Ravi Kumar, Silvio Lattanzi, Rina Panigrahy, David Woodruff


Consistent k-Clustering

Silvio Lattanzi, Sergei Vassilvitskii


Input Switched Affine Networks: An RNN Architecture Designed for Interpretability

Jakob Foerster, Justin Gilmer, Jan Chorowski, Jascha Sohl-Dickstein, David Sussillo


Online and Linear-Time Attention by Enforcing Monotonic Alignments

Colin Raffel, Thang Luong, Peter Liu, Ron Weiss, Douglas Eck


Gradient Boosted Decision Trees for High Dimensional Sparse Output

Si Si, Huan Zhang, Sathiya Keerthi, Dhruv Mahajan, Inderjit Dhillon, Cho-Jui Hsieh


Sequence Tutor: Conservative fine-tuning of sequence generation models with KL-control

Natasha Jaques, Shixiang Gu, Dzmitry Bahdanau, Jose Hernandez-Lobato, Richard E Turner, Douglas Eck


Uniform Convergence Rates for Kernel Density Estimation

Heinrich Jiang


Density Level Set Estimation on Manifolds with DBSCAN

Heinrich Jiang


Maximum Selection and Ranking under Noisy Comparisons

Moein Falahatgar, Alon Orlitsky, Venkatadheeraj Pichapati, Ananda Suresh


Neural Audio Synthesis of Musical Notes with WaveNet Autoencoders

Cinjon Resnick, Adam Roberts, Jesse Engel, Douglas Eck, Sander Dieleman, Karen Simonyan, Mohammad Norouzi


Distributed Mean Estimation with Limited Communication

Ananda Suresh, Felix Yu, Sanjiv Kumar, Brendan McMahan


Learning to Generate Long-term Future via Hierarchical Prediction

Ruben Villegas, Jimei Yang, Yuliang Zou, Sungryull Sohn, Xunyu Lin, Honglak Lee


Variational Boosting: Iteratively Refining Posterior Approximations

Andrew Miller, Nicholas J Foti, Ryan Adams


RobustFill: Neural Program Learning under Noisy I/O

Jacob Devlin, Jonathan Uesato, Surya Bhupatiraju, Rishabh Singh, Abdel-rahman Mohamed, Pushmeet Kohli


A Unified Maximum Likelihood Approach for Estimating Symmetric Properties of Discrete Distributions

Jayadev Acharya, Hirakendu Das, Alon Orlitsky, Ananda Suresh


Axiomatic Attribution for Deep Networks

Ankur Taly, Qiqi Yan, Mukund Sundararajan


Differentiable Programs with Neural Libraries

Alex L Gaunt, Marc Brockschmidt, Nate Kushman, Daniel Tarlow


Latent LSTM Allocation: Joint Clustering and Non-Linear Dynamic Modeling of Sequence Data 

Manzil Zaheer, Amr Ahmed, Alex Smola


Device Placement Optimization with Reinforcement Learning

Azalia Mirhoseini, Hieu Pham, Quoc Le, Benoit Steiner, Mohammad Norouzi, Rasmus Larsen, Yuefeng Zhou, Naveen Kumar, Samy Bengio, Jeff Dean


Canopy — Fast Sampling with Cover Trees

Manzil Zaheer, Satwik Kottur, Amr Ahmed, Jose Moura, Alex Smola


Zero-Shot Task Generalization with Multi-Task Deep Reinforcement Learning

Junhyuk Oh, Satinder Singh, Honglak Lee, Pushmeet Kohli


Probabilistic Submodular Maximization in Sub-Linear Time

Serban Stan, Morteza Zadimoghaddam, Andreas Krause, Amin Karbasi


Deep Value Networks Learn to Evaluate and Iteratively Refine Structured Outputs

Michael Gygli, Mohammad Norouzi, Anelia Angelova


Stochastic Generative Hashing

Bo Dai, Ruiqi Guo, Sanjiv Kumar, Niao He, Le Song


Accelerating Eulerian Fluid Simulation With Convolutional Networks

Jonathan Tompson, Kristofer D Schlachter, Pablo Sprechmann, Ken Perlin


Large-Scale Evolution of Image Classifiers

Esteban Real, Sherry Moore, Andrew Selle, Saurabh Saxena, Yutaka Leon Suematsu, Jie Tan, Quoc Le, Alexey Kurakin


Neural Message Passing for Quantum Chemistry

Justin Gilmer, Samuel Schoenholz, Patrick Riley, Oriol Vinyals, George Dahl


Neural Optimizer Search with Reinforcement Learning

Irwan Bello, Barret Zoph, Vijay Vasudevan, Quoc Le


Microsoft: 39 papers, 7 program committee members, broad collaboration with universities and research institutes


Microsoft is arguably the king of this conference, with 39 papers accepted. Before counting keywords in Microsoft's ICML 2017 papers, though, the first thing we noticed was the author affiliations: in sharp contrast to DeepMind, nearly every Microsoft paper involves at least one university or research institute, and collaborators also include Google, Facebook, and Amazon. This may be one reason Microsoft has as many as 39 accepted papers.


Program committee members:


Alekh Agarwal (Microsoft Research)

Lester Mackey (Microsoft Research)

Lihong Li (Microsoft Research)

Pushmeet Kohli (Microsoft Research)

Ryota Tomioka (Microsoft Research)

Sebastian Bubeck (Microsoft Research)

Sebastian Nowozin (Microsoft Research)


Accepted papers and authors:


Doubly Accelerated Methods for Faster CCA and Generalized Eigendecomposition

Zeyuan Allen-Zhu (Microsoft Research / Princeton / IAS) · Yuanzhi Li (Princeton University)


Follow the Compressed Leader: Even Faster Online Learning of Eigenvectors

Zeyuan Allen-Zhu (Microsoft Research / Princeton / IAS) · Yuanzhi Li (Princeton University)


Faster Principal Component Regression via Optimal Polynomial Approximation to Matrix sgn(x)

Zeyuan Allen-Zhu (Microsoft Research / Princeton / IAS) · Yuanzhi Li (Princeton University)


Sequence Modeling via Segmentations

Chong Wang (Microsoft Research) · Yining Wang (CMU) · Po-Sen Huang (Microsoft Research) · Abdel-rahman Mohamed (Microsoft) · Dengyong Zhou (Microsoft Research) · Li Deng (Citadel)


Measuring Sample Quality with Kernels

Jackson Gorham (Stanford) · Lester Mackey (Microsoft Research)


Asynchronous Stochastic Gradient Descent with Delay Compensation

Shuxin Zheng (University of Science and Technology of China) · Qi Meng (Peking University) · Taifeng Wang (Microsoft Research) · Wei Chen (Microsoft Research) · Tie-Yan Liu (Microsoft)


Natasha: Faster Non-Convex Stochastic Optimization Via Strongly Non-Convex Parameter

Zeyuan Allen-Zhu (Microsoft Research / Princeton / IAS)


Near-Optimal Design of Experiments via Regret Minimization

Zeyuan Allen-Zhu (Microsoft Research / Princeton / IAS) · Yuanzhi Li (Princeton University) · Aarti Singh (Carnegie Mellon University, Pittsburgh) · Yining Wang (CMU)


Contextual Decision Processes with low Bellman rank are PAC-Learnable

Nan Jiang (Microsoft Research) · Akshay Krishnamurthy (UMass) · Alekh Agarwal (Microsoft Research) · John Langford (Microsoft Research) · Robert Schapire (Microsoft Research)


Logarithmic Time One-Against-Some

Hal Daumé (University of Maryland) · Nikos Karampatziakis (Microsoft) · John Langford (Microsoft Research) · Paul Mineiro (Microsoft)


Optimal and Adaptive Off-policy Evaluation in Contextual Bandits

Yu-Xiang Wang (Carnegie Mellon University / Amazon AWS) · Alekh Agarwal (Microsoft Research) · Miroslav Dudik (Microsoft Research)


Safety-Aware Algorithms for Adversarial Contextual Bandit

Wen Sun (Carnegie Mellon University) · Debadeepta Dey (Microsoft) · Ashish Kapoor (Microsoft Research)


How to Escape Saddle Points Efficiently

Chi Jin (UC Berkeley) · Rong Ge (Duke University) · Praneeth Netrapalli (Microsoft Research) · Sham M. Kakade (University of Washington) · Michael Jordan (UC Berkeley)


Stochastic Variance Reduction Methods for Policy Evaluation

Simon Du (Carnegie Mellon University) · Jianshu Chen (Microsoft Research) · Lihong Li (Microsoft Research) · Lin Xiao (Microsoft Research) · Dengyong Zhou (Microsoft Research)


Provable Optimal Algorithms for Generalized Linear Contextual Bandits

Lihong Li (Microsoft Research) · Yu Lu (Yale University) · Dengyong Zhou (Microsoft Research)


Learning Continuous Semantic Representations of Symbolic Expressions

Miltiadis Allamanis (Microsoft Research) · Pankajan Chanthirasegaran · Pushmeet Kohli (Microsoft Research) · Charles Sutton (University of Edinburgh)


RobustFill: Neural Program Learning under Noisy I/O

Jacob Devlin (Microsoft Research) · Jonathan Uesato (MIT) · Surya Bhupatiraju (MIT) · Rishabh Singh (Microsoft Research) · Abdel-rahman Mohamed (Microsoft) · Pushmeet Kohli (Microsoft Research)


Adversarial Variational Bayes: Unifying Variational Autoencoders and Generative Adversarial Networks

Lars Mescheder (MPI Tübingen) · Sebastian Nowozin (Microsoft Research) · Andreas Geiger (MPI Tübingen)


ProtoNN: Compressed and Accurate kNN for Resource-scarce Devices

Chirag Gupta (Microsoft Research, India) · Arun Suggala (Carnegie Mellon University) · Ankit Goyal (University of Michigan) · Saurabh Goyal (IBM India Pvt Ltd) · Ashish Kumar (Microsoft Research) · Bhargavi Paranjape (Microsoft Research) · Harsha Vardhan Simhadri (Microsoft Research) · Raghavendra Udupa (Microsoft Research) · Manik Varma (Microsoft Research) · Prateek Jain (Microsoft Research)


Optimal algorithms for smooth and strongly convex distributed optimization in networks

Kevin Scaman (MSR-INRIA Joint Center) · Yin Tat Lee (Microsoft Research) · Francis Bach (INRIA) · Sebastien Bubeck (Microsoft Research) · Laurent Massoulié (MSR-INRIA Joint Center)


Resource-efficient Machine Learning in 2 KB RAM for the Internet of Things

Ashish Kumar (Microsoft Research) · Saurabh Goyal (IBM India Pvt Ltd) · Manik Varma (Microsoft Research)


Batched High-dimensional Bayesian Optimization via Structural Kernel Learning

Zi Wang (MIT) · Chengtao Li · Stefanie Jegelka (MIT) · Pushmeet Kohli (Microsoft Research)


Recovery Guarantees for One-hidden-layer Neural Networks

Kai Zhong (University of Texas at Austin) · Zhao Song (UT-Austin) · Prateek Jain (Microsoft Research) · Peter Bartlett (UC Berkeley) · Inderjit Dhillon (UT Austin & Amazon)


Dual Supervised Learning

Yingce Xia (University of Science and Technology of China) · Tao Qin (Microsoft Research Asia) · Wei Chen (Microsoft Research) · Jiang Bian (Microsoft Research) · Nenghai Yu (USTC) · Tie-Yan Liu (Microsoft)


Improving Gibbs Sampler Scan Quality with DoGS

Ioannis Mitliagkas (Stanford University) · Lester Mackey (Microsoft Research)


Nearly Optimal Robust Matrix Completion

Yeshwanth Cherapanamjeri (Microsoft Research) · Prateek Jain (Microsoft Research) · Kartik Gupta (Microsoft Research)


Stabilising Experience Replay for Deep Multi-Agent Reinforcement Learning

Jakob Foerster (University of Oxford) · Nantas Nardelli (University of Oxford) · Gregory Farquhar (University of Oxford) · Phil Torr (Oxford) · Pushmeet Kohli (Microsoft Research) · Shimon Whiteson (University of Oxford)


Differentiable Programs with Neural Libraries (with Google Brain)

Alex Gaunt (Microsoft) · Marc Brockschmidt (Microsoft Research) · Nate Kushman (Microsoft Research) · Daniel Tarlow (Google Brain)


Active Heteroscedastic Regression

Kamalika Chaudhuri (University of California at San Diego) · Prateek Jain (Microsoft Research) · Nagarajan Natarajan (Microsoft Research)


Consistency Analysis for Binary Classification Revisited

Wojciech Kotlowski (Poznan University of Technology) · Nagarajan Natarajan (Microsoft Research) · Krzysztof Dembczynski (Poznan University of Technology) · Oluwasanmi Koyejo (University of Illinois at Urbana-Champaign)


Active Learning for Cost-Sensitive Classification

Alekh Agarwal (Microsoft Research) · Akshay Krishnamurthy (UMass) · Tzu-Kuo Huang (Uber) · Hal Daumé III (University of Maryland) · John Langford (Microsoft Research)


Adaptive Neural Networks for Fast Test-Time Prediction

Tolga Bolukbasi (Boston University) · Joseph Wang (Amazon) · Ofer Dekel (Microsoft) · Venkatesh Saligrama (Boston University)


Zero-Shot Task Generalization with Multi-Task Deep Reinforcement Learning

Junhyuk Oh (University of Michigan) · Satinder Singh (University of Michigan) · Honglak Lee (Google / U. Michigan) · Pushmeet Kohli (Microsoft Research)


Robust Structured Estimation with Single-Index Models

Sheng Chen (University of Minnesota) · Arindam Banerjee (University of Minnesota) · Sreangsu Acharyya (Microsoft Research India)


Gradient Coding: Avoiding Stragglers in Distributed Learning

Rashish Tandon (University of Texas at Austin) · Qi Lei (University of Texas at Austin) · Alexandros Dimakis (UT Austin) · Nikos Karampatziakis (Microsoft)


Exploiting Strong Convexity from Data with Primal-Dual First-Order Algorithms

Jialei Wang (University of Chicago) · Lin Xiao (Microsoft Research)


Gradient Boosted Decision Trees for High Dimensional Sparse Output

Si Si (Google Research) · Huan Zhang (UC Davis) · Sathiya Keerthi (Microsoft) · Dhruv Mahajan (Facebook) · Inderjit Dhillon (UT Austin & Amazon) · Cho-Jui Hsieh (University of California, Davis)


Learning Algorithms for Active Learning

Philip Bachman (Maluuba) · Alessandro Sordoni (Microsoft Maluuba) · Adam Trischler (Maluuba)


Deep IV: A Flexible Approach for Counterfactual Prediction

Greg Lewis (Microsoft Research) · Matt Taddy (MICROSOFT) · Jason Hartford (University of British Columbia) · Kevin Leyton-Brown (University of British Columbia) 

