Transfer learning (TL) is a machine learning (ML) method that transfers knowledge from one domain (the source domain) to another domain (the target domain) so that learning in the target domain improves. As a research problem in ML, it focuses on storing knowledge gained while solving one problem and applying it to a different but related problem. For example, knowledge gained while learning to recognize cars can be applied when trying to recognize trucks. Although formal ties between the two fields are limited, this line of research is related to the long history of psychological literature on transfer of learning. From a practical standpoint, reusing or transferring information from previously learned tasks when learning new tasks can significantly improve the sample efficiency of a reinforcement learning agent.
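As a minimal illustration of this idea, the sketch below (all data and dimensions are hypothetical) reuses the parameters learned on a data-rich source regression task as a biased-regularization prior when fitting a related target task from only a handful of examples, instead of training the target model from scratch:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 20                                              # feature dimension
w_true_src = rng.normal(size=d)                     # source-task parameters
w_true_tgt = w_true_src + 0.1 * rng.normal(size=d)  # related target task

# Plenty of source data, very little target data.
X_src = rng.normal(size=(500, d)); y_src = X_src @ w_true_src
X_tgt = rng.normal(size=(5, d));   y_tgt = X_tgt @ w_true_tgt

# "Pretrain": ordinary least squares on the source task.
w_src, *_ = np.linalg.lstsq(X_src, y_src, rcond=None)

# "Fine-tune": shrink the target solution toward the source parameters
# (ridge regression biased toward w_src rather than toward zero).
lam = 1.0
A = X_tgt.T @ X_tgt + lam * np.eye(d)
w_transfer = np.linalg.solve(A, X_tgt.T @ y_tgt + lam * w_src)

# Baseline: minimum-norm least squares from scratch on the tiny target set.
w_scratch, *_ = np.linalg.lstsq(X_tgt, y_tgt, rcond=None)

err_transfer = np.linalg.norm(w_transfer - w_true_tgt)
err_scratch = np.linalg.norm(w_scratch - w_true_tgt)
print(err_transfer, err_scratch)
```

Because the two tasks are closely correlated here, the transferred solution lands much nearer the true target parameters than the from-scratch fit, which only has 5 examples for 20 parameters.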

Knowledge Collection

Transfer Learning Collection (2019-12-09)

Surveys

Theory

Models and Algorithms

Multi-source Transfer Learning

Heterogeneous Transfer Learning

Online Transfer Learning

Few-shot Learning

Deep Transfer Learning

Multi-task Learning

Transfer Reinforcement Learning

Transfer Metric Learning

Lifelong Transfer Learning

Related Resources

Domain Experts

Courses

Code

Datasets

  • MNIST vs MNIST-M vs SVHN vs Synth vs USPS: digit images
  • GTSRB vs Syn Signs: traffic sign recognition datasets; transfer between real and synthetic signs.
  • NYU Depth Dataset V2: labeled paired images taken with two different cameras (normal and depth)
  • CelebA: faces of celebrities, making it possible, for instance, to perform gender or hair-color translation
  • Office-Caltech dataset: images of office objects from the 10 common categories shared by the Office-31 and Caltech-256 datasets. There are four domains in total: Amazon, Webcam, DSLR and Caltech.
  • Cityscapes dataset: street scene photos (source) and their annotated versions (target)
  • UnityEyes vs MPIIGaze: simulated vs real gaze images (eyes)
  • CycleGAN datasets: horse2zebra, apple2orange, cezanne2photo, monet2photo, ukiyoe2photo, vangogh2photo, summer2winter
  • pix2pix dataset: edges2handbags, edges2shoes, facade, maps
  • RaFD: facial images with 8 different emotions (anger, disgust, fear, happiness, sadness, surprise, contempt, and neutral). You can transfer a face from one emotion to another.
  • VisDA 2017 classification dataset: 12 categories of object images in 2 domains: 3D-models and real images.
  • Office-Home dataset: images of objects in 4 domains: art, clipart, product and real-world.
  • DukeMTMC-reid and Market-1501: two pedestrian datasets collected at different places. The evaluation metric is based on open-set image retrieval.
  • Amazon review benchmark dataset: sentiment analysis for four kinds (domains) of reviews: books, DVDs, electronics, kitchen
  • ECML/PKDD Spam Filtering: emails from 3 different inboxes, which can represent 3 domains.
  • 20 Newsgroups: collection of newsgroup documents across 6 top-level categories and 20 subcategories. Subcategories can play the role of the domains, as described in this article.

Hands-on Practice


This is a preliminary version of limited scope; suggestions and additions for any errors or omissions are welcome, and it will be kept up to date. This article is original content from the Zhuanzhi content team and may not be reproduced without permission; for reprint requests, email fangquanyi@gmail.com or contact the Zhuanzhi assistant on WeChat (Rancho_Fang).

Please follow http://www.zhuanzhi.ai and the Zhuanzhi WeChat official account for first-hand AI-related knowledge.

Last updated: 2019-12-09

VIP Content

Classical machine learning algorithms assume that the training data and test data share the same input feature space and the same data distribution. In many real-world problems this assumption does not hold, causing classical algorithms to fail. Domain adaptation is a learning paradigm whose key technique is to align the data distributions of the source and target domains by learning new feature representations, so that a model trained on a labelled source domain can be transferred directly to an unlabelled target domain without a significant loss of performance. This article introduces the definition, taxonomy, and representative algorithms of domain adaptation, focusing on metric-learning-based and adversarial-learning-based approaches. Finally, it analyzes typical applications and open challenges of domain adaptation, identifies development trends, and proposes possible future research directions.
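To make the distribution-alignment idea concrete, here is a small sketch (with hypothetical data, not taken from the article) of the maximum mean discrepancy (MMD), the statistic that many metric-learning-based domain adaptation methods minimize in order to align source and target feature distributions:

```python
import numpy as np

def rbf_mmd2(X, Y, sigma=1.0):
    """Biased estimate of squared MMD between samples X and Y
    under an RBF (Gaussian) kernel with bandwidth sigma."""
    def kernel(A, B):
        # Pairwise squared distances, then the Gaussian kernel.
        d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
        return np.exp(-d2 / (2 * sigma**2))
    return kernel(X, X).mean() + kernel(Y, Y).mean() - 2 * kernel(X, Y).mean()

rng = np.random.default_rng(0)
src = rng.normal(size=(200, 5))            # source-domain features
tgt_same = rng.normal(size=(200, 5))       # target drawn from the same distribution
tgt_shift = rng.normal(size=(200, 5)) + 2  # target with a mean shift

print(rbf_mmd2(src, tgt_same))   # close to zero
print(rbf_mmd2(src, tgt_shift))  # clearly positive
```

In a domain adaptation network, a term like `rbf_mmd2(features_src, features_tgt)` would be added to the training loss, pushing the feature extractor to make the two domains indistinguishable; adversarial methods replace this fixed kernel statistic with a learned domain discriminator.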


Latest Papers

We study the transfer learning process between two linear regression problems. An important and timely special case is when the regressors are overparameterized and perfectly interpolate their training data. We examine a parameter transfer mechanism whereby a subset of the parameters of the target task solution are constrained to the values learned for a related source task. We analytically characterize the generalization error of the target task in terms of the salient factors in the transfer learning architecture, i.e., the number of examples available, the number of (free) parameters in each of the tasks, the number of parameters transferred from the source to target task, and the correlation between the two tasks. Our non-asymptotic analysis shows that the generalization error of the target task follows a two-dimensional double descent trend (with respect to the number of free parameters in each of the tasks) that is controlled by the transfer learning factors. Our analysis points to specific cases where the transfer of parameters is beneficial. Specifically, we show that transferring a specific set of parameters that generalizes well on the respective part of the source task can soften the demand on the task correlation level that is required for successful transfer learning. Moreover, we show that the usefulness of a transfer learning setting is fragile and depends on a delicate interplay among the set of transferred parameters, the relation between the tasks, and the true solution.
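A minimal sketch of the parameter transfer mechanism the abstract describes (toy data; the split point `p` is an arbitrary illustration): the first `p` coordinates of the target solution are constrained to the values learned on the source task, and only the remaining free coordinates are fit to the target data by minimum-norm least squares:

```python
import numpy as np

rng = np.random.default_rng(1)
d, p = 30, 10           # total parameters; p of them transferred from source
n_src, n_tgt = 200, 15  # target task is overparameterized (n_tgt < d)

w_src_true = rng.normal(size=d)
w_tgt_true = 0.9 * w_src_true + 0.1 * rng.normal(size=d)  # correlated tasks

X_src = rng.normal(size=(n_src, d)); y_src = X_src @ w_src_true
X_tgt = rng.normal(size=(n_tgt, d)); y_tgt = X_tgt @ w_tgt_true

# Solve the source task by ordinary least squares.
w_src, *_ = np.linalg.lstsq(X_src, y_src, rcond=None)

# Constrain the first p target parameters to the source values, then solve
# minimum-norm least squares over the remaining d - p free parameters.
residual = y_tgt - X_tgt[:, :p] @ w_src[:p]
w_free, *_ = np.linalg.lstsq(X_tgt[:, p:], residual, rcond=None)
w_transfer = np.concatenate([w_src[:p], w_free])

# The free part still has more parameters than target examples, so the
# combined solution perfectly interpolates the target training data.
print(np.linalg.norm(X_tgt @ w_transfer - y_tgt))
```

Varying `p` and the task correlation in this sketch is exactly the axis along which the paper characterizes when transferring parameters helps or hurts generalization.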
