Unsupervised pretraining, which learns a useful representation from a large amount of unlabeled data to facilitate the learning of downstream tasks, is a critical component of modern large-scale machine learning systems. Despite its tremendous empirical success, rigorous theoretical understanding of why unsupervised pretraining generally helps remains rather limited -- most existing results are restricted to particular methods or approaches for unsupervised pretraining with specialized structural assumptions. This paper studies a generic framework in which the unsupervised representation learning task is specified by an abstract class of latent variable models $\Phi$ and the downstream task is specified by a class of prediction functions $\Psi$. We consider a natural approach that uses Maximum Likelihood Estimation (MLE) for unsupervised pretraining and Empirical Risk Minimization (ERM) for learning downstream tasks. We prove that, under a mild "informative" condition, our algorithm achieves an excess risk of $\tilde{\mathcal{O}}(\sqrt{\mathcal{C}_\Phi/m} + \sqrt{\mathcal{C}_\Psi/n})$ on downstream tasks, where $\mathcal{C}_\Phi, \mathcal{C}_\Psi$ are complexity measures of the function classes $\Phi, \Psi$, and $m, n$ are the numbers of unlabeled and labeled samples, respectively. Compared with the baseline of $\tilde{\mathcal{O}}(\sqrt{\mathcal{C}_{\Phi \circ \Psi}/n})$ achieved by performing supervised learning using only the labeled data, our result rigorously demonstrates the benefit of unsupervised pretraining when $m \gg n$ and $\mathcal{C}_{\Phi \circ \Psi} > \mathcal{C}_\Psi$. We further show that this generic framework covers a wide range of approaches to unsupervised pretraining, including factor models, Gaussian mixture models, and contrastive learning.
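To make the two-stage recipe concrete, here is a minimal sketch (not the paper's exact procedure) of "MLE pretraining, then ERM on the learned representation." It instantiates the latent variable class $\Phi$ as a Gaussian mixture model fit by maximum likelihood (via EM) on the unlabeled data, and the downstream class $\Psi$ as a linear classifier trained by ERM on the induced representation; both choices, as well as the synthetic data, are illustrative assumptions rather than anything prescribed by the abstract.

```python
import numpy as np
from sklearn.mixture import GaussianMixture          # MLE (via EM) over the latent variable class Phi
from sklearn.linear_model import LogisticRegression  # ERM over the downstream class Psi

rng = np.random.default_rng(0)

# Synthetic data: m unlabeled and n labeled samples, with m >> n -- the regime where
# the bound sqrt(C_Phi/m) + sqrt(C_Psi/n) can beat the supervised-only baseline.
m, n, d = 20000, 200, 10
X_unlabeled = rng.normal(size=(m, d))
X_labeled = rng.normal(size=(n, d))
y_labeled = (X_labeled[:, 0] > 0).astype(int)

# Stage 1 (pretraining): maximum likelihood estimation of a latent variable model
# on the unlabeled data alone.
phi = GaussianMixture(n_components=5, random_state=0).fit(X_unlabeled)

# The learned representation: posterior component memberships under the fitted model.
def represent(X):
    return phi.predict_proba(X)

# Stage 2 (downstream): empirical risk minimization on the representation of the
# labeled data only.
psi = LogisticRegression().fit(represent(X_labeled), y_labeled)

print("downstream training accuracy:", psi.score(represent(X_labeled), y_labeled))
```

The supervised-only baseline mentioned in the abstract would instead fit a predictor from the composite class $\Phi \circ \Psi$ directly on the $n$ labeled samples, paying for the larger complexity $\mathcal{C}_{\Phi \circ \Psi}$ with the smaller sample size $n$.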

