Paper abstract: This paper considers the problem of efficiently exploring unknown environments, a key challenge in artificial intelligence. We propose a "learning to explore" framework that learns a policy from a distribution of environments. At test time, given an unseen environment drawn from the same distribution, the policy aims to generalize the exploration strategy so as to visit the maximum number of unique states within a limited number of steps. We focus in particular on environments with graph-structured state spaces, which arise in many important practical applications such as software testing and map building. We formulate this task as a reinforcement learning problem in which the exploration agent is rewarded for transitioning to previously unseen environment states, and we use a graph-structured memory to encode the agent's past trajectory. Experimental results show that our approach is highly effective for exploring spatial maps, and that it outperforms methods hand-designed by human experts on the challenging problems of coverage-guided software testing of domain-specific programs and real mobile applications.
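The core reward signal described above can be sketched in a few lines. This is our minimal illustration, not the paper's implementation: the agent earns a reward only when it transitions to a state it has never visited, and a graph-structured memory records the unique states and transitions seen so far.

```python
# Minimal sketch of the novelty reward (illustrative; class and method
# names are ours, not from the paper).
class ExplorationMemory:
    """Graph-structured record of visited states and transitions."""

    def __init__(self):
        self.visited = set()   # unique states seen so far
        self.edges = []        # (prev_state, state) transitions taken

    def step(self, prev_state, state):
        """Return the exploration reward for moving prev_state -> state."""
        self.edges.append((prev_state, state))
        if state in self.visited:
            return 0.0         # already explored: no reward
        self.visited.add(state)
        return 1.0             # novel state: reward the agent

memory = ExplorationMemory()
rewards = [memory.step(a, b)
           for a, b in [("s0", "s1"), ("s1", "s2"), ("s2", "s1")]]
# total reward 2.0: s1 and s2 are novel; revisiting s1 earns nothing
```

Maximizing the sum of these rewards over a fixed step budget is exactly the "visit the maximum number of unique states" objective.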

Paper outline:

  1. Introduction
  2. Problem Formulation
  3. Model
  4. Experiments
    • Synthetic 2D Maze Exploration
    • Generating Inputs for Testing Domain-Specific Programs
    • App Testing
  5. Related Work
  6. Conclusion

Related content


Meta-learning has been proposed as a framework to address the challenging few-shot learning setting. The key idea is to leverage a large number of similar few-shot tasks in order to learn how to adapt a base-learner to a new task for which only a few labeled samples are available. Since deep neural networks (DNNs) tend to overfit when trained on only a few samples, meta-learning typically uses shallow neural networks (SNNs) instead, which limits its effectiveness. This paper proposes a novel learning method called meta-transfer learning (MTL). Specifically, "meta" refers to training on multiple tasks, and "transfer" is achieved by learning scaling and shifting functions of the DNN weights for each task. In addition, we introduce the hard task (HT) meta-batch scheme as an effective learning curriculum for MTL. We conduct experiments on two challenging few-shot learning benchmarks, miniImageNet and Fewshot-CIFAR100, using (5-class, 1-shot) and (5-class, 5-shot) recognition tasks. Extensive comparisons with related work validate that the proposed meta-transfer learning method trained with the HT meta-batch scheme achieves strong performance. An ablation study further shows that both components contribute to fast convergence and high accuracy.
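The "transfer" mechanism above can be illustrated concretely. In this hedged sketch (variable names such as `phi_s` and `phi_t` are ours, not from the paper), the pretrained DNN weights stay frozen and only a lightweight per-task scaling and shifting of each weight tensor is learned, so a few-shot task cannot overfit the full parameter set:

```python
import numpy as np

# Illustrative scaled-and-shifted layer (our naming, not the paper's code).
rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3))   # frozen pretrained weights
b = np.zeros(4)                   # frozen pretrained bias

phi_s = np.ones((4, 1))           # learnable per-task scaling (init at 1)
phi_t = np.zeros(4)               # learnable per-task shifting (init at 0)

def forward(x):
    """Apply the scaled-and-shifted layer: (W * phi_s) @ x + (b + phi_t)."""
    return (W * phi_s) @ x + (b + phi_t)

x = rng.standard_normal(3)
# at initialization the transform is the identity on the pretrained layer,
# so adaptation starts from the pretrained behavior
assert np.allclose(forward(x), W @ x + b)
```

Only `phi_s` and `phi_t` would receive gradients during per-task adaptation; the frozen `W` and `b` carry the transferred knowledge.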

Link:

https://arxiv.org/abs/1812.02391

Code:

https://github.com/yaoyao-liu/meta-transfer-learning


Abstract:

Recommender systems often face heterogeneous datasets containing highly personalized user histories, where no single model can provide the best recommendation for every user. We observe this ubiquitous phenomenon on both public and private datasets, and address the model selection problem of optimizing recommendation quality for each individual user. We propose a meta-learning framework to facilitate user-level adaptive model selection in recommender systems. In this framework, an ensemble of recommenders is trained on data from all users, on top of which a model selector is trained via meta-learning to choose the best model for each user based on that user's specific history. We conduct extensive experiments on two public datasets and a real-world production dataset, demonstrating that our proposed framework improves over single-model baselines and sample-level model selectors in terms of AUC and LogLoss. In particular, these improvements may translate into substantial profit gains when deployed in online recommender systems.
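The user-level selection idea can be sketched as follows. This is a hedged illustration under our own assumptions (in the paper the selector is itself meta-learned rather than a per-user argmax, and real models are not constant predictors): train a pool of recommenders on all users, then pick, for each user, the model that scores best on that user's own history instead of one global winner.

```python
# Illustrative per-user model selection (names and toy data are ours).
def select_per_user(models, user_histories, score):
    """Return {user: best_model_name} under a per-user validation score."""
    choice = {}
    for user, history in user_histories.items():
        choice[user] = max(models, key=lambda name: score(models[name], history))
    return choice

# toy pool: each "model" is just a constant predictor here
models = {"popularity": 0.3, "collaborative": 0.7}
histories = {"alice": 0.8, "bob": 0.2}

# score = negative distance between prediction and the user's history signal
best = select_per_user(models, histories, lambda m, h: -abs(m - h))
# alice is matched with "collaborative" (0.7 is closest to 0.8),
# bob with "popularity" (0.3 is closest to 0.2)
```

The contrast with a single global model is the whole point: a global choice would pick one name for everybody, while the selector adapts per user.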

Link:

https://arxiv.org/abs/2001.10378


Mining graph data has become a popular research topic in computer science and has been widely studied in both academia and industry, given the increasing amount of network data in recent years. However, the sheer volume of network data poses great challenges for efficient analysis. This motivates graph representation learning, which maps a graph into a low-dimensional vector space while preserving the original graph structure and supporting graph inference. The investigation of efficient graph representations has profound theoretical significance and important practical value; we therefore introduce some basic ideas in graph representation/network embedding, as well as some representative models, in this chapter.
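One classic ingredient behind many of the embedding models the chapter surveys can be sketched briefly. This is our DeepWalk-style illustration, not a model from the chapter: sample truncated random walks so that graph neighborhoods can later be fed to a word2vec-style model as "sentences".

```python
import random

# Illustrative truncated random-walk sampler (our sketch, DeepWalk-style).
def random_walks(adj, walk_len=4, walks_per_node=2, seed=0):
    """adj: {node: [neighbors]} -> list of random walks (lists of nodes)."""
    rng = random.Random(seed)
    walks = []
    for start in adj:
        for _ in range(walks_per_node):
            walk = [start]
            while len(walk) < walk_len and adj[walk[-1]]:
                walk.append(rng.choice(adj[walk[-1]]))
            walks.append(walk)
    return walks

graph = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
walks = random_walks(graph)
# every consecutive pair in every walk is an edge of the graph
assert all(v in graph[u] for w in walks for u, v in zip(w, w[1:]))
```

Feeding these walks to a skip-gram model would then place frequently co-visited nodes close together in the low-dimensional vector space, which is the structure-preservation property the abstract describes.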


Paper title

FEW SHOT LINK PREDICTION VIA META LEARNING

Paper abstract

We consider the task of few-shot link prediction, where the goal is to predict missing edges across multiple graphs using only a small sample of known edges. Current link prediction methods are generally ill-suited for this task, because they can neither effectively transfer knowledge between graphs in a multi-graph setting nor learn effectively from very sparse data. To address this challenge, we introduce Meta-Graph, a new gradient-based meta-learning framework that leverages higher-order gradients along with a learned graph signature function to conditionally generate a graph neural network initialization. We show that Meta-Graph not only adapts quickly but also converges to a better final solution, learning effectively from only a small sample of true edges.
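The "higher-order gradients" the abstract mentions come from differentiating through the inner adaptation step. The following is a minimal MAML-style sketch in one dimension, our illustration rather than the Meta-Graph model itself (each "task" is just a scalar target with a quadratic loss, and the chain-rule factor `(1 - 2*alpha)` is exactly the second-order term):

```python
# Minimal gradient-based meta-learning sketch (our toy, not Meta-Graph).
def meta_train(tasks, theta=0.0, alpha=0.1, beta=0.05, steps=200):
    """Each task is a target t with loss (theta - t)**2; adapt, then meta-update."""
    for _ in range(steps):
        meta_grad = 0.0
        for t in tasks:
            adapted = theta - alpha * 2 * (theta - t)        # one inner step
            # outer gradient through the inner step:
            # d/d_theta (adapted - t)**2 = 2*(adapted - t)*(1 - 2*alpha)
            meta_grad += 2 * (adapted - t) * (1 - 2 * alpha)
        theta -= beta * meta_grad / len(tasks)
    return theta

# with two symmetric tasks the meta-initialization converges to their
# midpoint, the point from which one inner step reaches either target best
theta = meta_train([-1.0, 1.0], theta=2.0)
assert abs(theta) < 1e-3
```

Dropping the `(1 - 2*alpha)` factor would give the first-order approximation; keeping it is what makes the method "higher-order".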

Paper authors

Avishek Joey Bose* (McGill University).


Talk topic: Frontiers in Imitation Learning

Talk abstract: The ongoing growth of spatiotemporal tracking and sensing data now makes it possible to analyze and model fine-grained behavior in a wide range of domains. For instance, tracking data is now collected for every NBA basketball game, with players, referees, and the ball tracked at 25 Hz, along with annotated game events such as passes, shots, and fouls. Other settings include laboratory animals, people in public spaces, professionals in settings such as operating rooms, actors speaking and performing, digital avatars in virtual environments, natural phenomena such as aerodynamics, and even the behavior of other computational systems. In this talk, I will describe ongoing research in developing structured imitation learning approaches for building predictive models of fine-grained behavior. Imitation learning is a branch of machine learning concerned with learning to imitate demonstrated dynamic behavior. Structured imitation learning involves imposing rigorous mathematical domain knowledge, which can (sometimes provably) accelerate learning and can also yield side benefits such as Lyapunov stability or interpretability of policy behavior. I will provide a high-level overview of the basic problem setting, as well as specific projects on modeling laboratory animals, professional sports, speech animation, and expensive computational oracles.

Speaker bio: Yisong Yue, PhD, is an assistant professor in the Computing and Mathematical Sciences department at Caltech. He was previously a research scientist at Disney Research, and before that a postdoctoral researcher in the Machine Learning Department and the iLab at Carnegie Mellon University. Yisong's research interests lie primarily in the theory and application of statistical machine learning, and he is particularly interested in developing novel methods for interactive and structured machine learning. His research has been applied to information retrieval, recommender systems, text classification, learning from rich user interfaces, analyzing implicit human feedback, clinical therapy, tutoring systems, data-driven animation, behavior analysis, sports analytics, experiment design for science, learning to optimize, policy learning in robotics, and adaptive planning and allocation problems.


Transfer learning aims at improving the performance of target learners on target domains by transferring the knowledge contained in different but related source domains. In this way, the dependence on a large amount of target-domain data for constructing target learners can be reduced. Due to its wide application prospects, transfer learning has become a popular and promising area in machine learning. Although there are already some valuable and impressive surveys on transfer learning, these surveys introduce approaches in a relatively isolated way and lack the recent advances in the field. With the rapid expansion of the transfer learning area, it is both necessary and challenging to comprehensively review the relevant studies. This survey attempts to connect and systematize existing transfer learning research, and to summarize and interpret its mechanisms and strategies in a comprehensive way, which may help readers better understand the current research status and ideas. Unlike previous surveys, this paper reviews more than forty representative transfer learning approaches from the perspectives of data and model. The applications of transfer learning are also briefly introduced. To show the performance of different transfer learning models, twenty representative models are used in experiments, evaluated on three different datasets: Amazon Reviews, Reuters-21578, and Office-31. The experimental results demonstrate the importance of selecting appropriate transfer learning models for different applications in practice.


There is a recent large and growing interest in generative adversarial networks (GANs), which offer powerful features for generative modeling, density estimation, and energy function learning. GANs are difficult to train and evaluate but are capable of creating amazingly realistic, though synthetic, image data. Ideas stemming from GANs such as adversarial losses are creating research opportunities for other challenges such as domain adaptation. In this paper, we look at the field of GANs with emphasis on these areas of emerging research. To provide background for adversarial techniques, we survey the field of GANs, looking at the original formulation, training variants, evaluation methods, and extensions. Then we survey recent work on transfer learning, focusing on comparing different adversarial domain adaptation methods. Finally, we take a look forward to identify open research directions for GANs and domain adaptation, including some promising applications such as sensor-based human behavior modeling.


We introduce KBGAN, an adversarial learning framework to improve the performances of a wide range of existing knowledge graph embedding models. Because knowledge graphs typically only contain positive facts, sampling useful negative training examples is a non-trivial task. Replacing the head or tail entity of a fact with a uniformly randomly selected entity is a conventional method for generating negative facts, but the majority of the generated negative facts can be easily discriminated from positive facts, and will contribute little towards the training. Inspired by generative adversarial networks (GANs), we use one knowledge graph embedding model as a negative sample generator to assist the training of our desired model, which acts as the discriminator in GANs. This framework is independent of the concrete form of generator and discriminator, and therefore can utilize a wide variety of knowledge graph embedding models as its building blocks. In experiments, we adversarially train two translation-based models, TransE and TransD, each with assistance from one of the two probability-based models, DistMult and ComplEx. We evaluate the performances of KBGAN on the link prediction task, using three knowledge base completion datasets: FB15k-237, WN18 and WN18RR. Experimental results show that adversarial training substantially improves the performances of target embedding models under various settings.
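The uniform corruption baseline that the abstract contrasts KBGAN against is easy to make concrete. This sketch shows only that conventional method (entity and relation names are our toy data): corrupt a positive triple (h, r, t) by replacing its head or tail with a uniformly sampled entity, which is exactly the step KBGAN replaces with a learned generator that proposes harder negatives.

```python
import random

# Conventional uniform negative sampling for KG embeddings (illustrative).
def uniform_corrupt(triple, entities, rng=None):
    """Replace the head or tail of (h, r, t) with a random other entity."""
    rng = rng or random.Random(0)
    h, r, t = triple
    if rng.random() < 0.5:
        h = rng.choice([e for e in entities if e != h])   # corrupt head
    else:
        t = rng.choice([e for e in entities if e != t])   # corrupt tail
    return (h, r, t)

entities = ["paris", "france", "berlin", "germany"]
fact = ("paris", "capital_of", "france")
neg = uniform_corrupt(fact, entities)
# the relation is kept; exactly one of head/tail has been replaced
assert neg != fact and neg[1] == "capital_of"
```

Most triples produced this way (e.g. a city as the tail of `capital_of`) are trivially implausible, which is why they contribute little to training and why an adversarial generator that selects harder corruptions helps.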
