MAML (Model-Agnostic Meta-Learning) is one of the most classic meta-learning algorithms, introduced in the paper "Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks". Original paper: https://arxiv.org/abs/1703.03400
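For readers unfamiliar with the algorithm, below is a minimal sketch of the MAML inner/outer loop in PyTorch on a toy sine-regression problem. The network, task sampler, and hyperparameters are illustrative assumptions, not the paper's experimental setup.

```python
import torch
import torch.nn.functional as F

def forward(params, x):
    # Two-layer MLP applied with an explicit parameter list, so that the
    # inner-loop "fast weights" can be swapped in without copying a model.
    w1, b1, w2, b2 = params
    return torch.tanh(x @ w1 + b1) @ w2 + b2

def sample_task(n=10):
    # Hypothetical task distribution: sine waves with random amplitude/phase.
    amp, phase = torch.rand(1) * 4 + 1, torch.rand(1) * 3.14
    x = torch.rand(2 * n, 1) * 10 - 5
    y = amp * torch.sin(x + phase)
    return (x[:n], y[:n]), (x[n:], y[n:])      # support set, query set

params = [(torch.randn(1, 40) * 0.1).requires_grad_(),
          torch.zeros(40, requires_grad=True),
          (torch.randn(40, 1) * 0.1).requires_grad_(),
          torch.zeros(1, requires_grad=True)]
meta_opt = torch.optim.Adam(params, lr=1e-3)
inner_lr, n_tasks = 0.01, 4

for step in range(1000):
    meta_opt.zero_grad()
    meta_loss = 0.0
    for _ in range(n_tasks):
        (x_s, y_s), (x_q, y_q) = sample_task()
        # Inner loop: one gradient step on the support set; create_graph=True
        # keeps the graph so the outer update can differentiate through it.
        grads = torch.autograd.grad(F.mse_loss(forward(params, x_s), y_s),
                                    params, create_graph=True)
        fast = [p - inner_lr * g for p, g in zip(params, grads)]
        # Outer objective: how well the adapted weights do on the query set.
        meta_loss = meta_loss + F.mse_loss(forward(fast, x_q), y_q)
    (meta_loss / n_tasks).backward()   # meta-gradient w.r.t. the initialization
    meta_opt.step()
```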

Graph classification aims to extract accurate information from graph-structured data and classify it. In recent years, graph neural networks (GNNs) have achieved satisfactory results on graph classification tasks. However, most GNN-based methods focus on designing graph convolution and graph pooling operations, while ignoring that graph-structured data are harder to collect or label than grid-based data. We exploit meta-learning for few-shot graph classification to alleviate the scarcity of labeled graph samples when training on new tasks. More specifically, to facilitate the learning of graph classification tasks, we use GNNs as the graph embedding backbone and meta-learning as the training paradigm, rapidly capturing task-specific knowledge in graph classification tasks and transferring it to new tasks. To improve the robustness of the meta-learner, we design a novel step controller based on reinforcement learning. Experiments show that, compared with baselines, our framework performs well.
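As a rough illustration of how such a framework fits together (not the paper's implementation), the sketch below combines a tiny dense-adjacency GNN encoder with a MAML-style episodic training loop in PyTorch. The graph/task sampler produces random placeholder data, and the paper's reinforcement-learning step controller is reduced to a fixed number of inner steps.

```python
import torch
import torch.nn.functional as F

def gnn_embed(params, adj, feat):
    """One message-passing layer + mean pooling over nodes -> graph-level logits."""
    w_msg, w_cls = params
    h = torch.relu((adj @ feat) @ w_msg)   # aggregate neighbours, then transform
    return h.mean(dim=0) @ w_cls

def sample_graph_task(n_way=2, k_shot=5, n_nodes=8, n_feat=4):
    """Hypothetical task sampler: random graphs with random class labels."""
    def rand_graph():
        adj = (torch.rand(n_nodes, n_nodes) > 0.5).float()
        return adj, torch.randn(n_nodes, n_feat)
    support = [(rand_graph(), torch.randint(n_way, (1,))) for _ in range(n_way * k_shot)]
    query = [(rand_graph(), torch.randint(n_way, (1,))) for _ in range(n_way * k_shot)]
    return support, query

def task_loss(params, batch):
    losses = [F.cross_entropy(gnn_embed(params, adj, feat).unsqueeze(0), y)
              for (adj, feat), y in batch]
    return torch.stack(losses).mean()

params = [(torch.randn(4, 16) * 0.1).requires_grad_(),
          (torch.randn(16, 2) * 0.1).requires_grad_()]
meta_opt = torch.optim.Adam(params, lr=1e-3)
inner_lr, inner_steps = 0.05, 3   # the paper instead learns the step schedule with RL

for episode in range(200):
    support, query = sample_graph_task()
    fast = params
    for _ in range(inner_steps):   # task-specific adaptation on the support set
        grads = torch.autograd.grad(task_loss(fast, support), fast, create_graph=True)
        fast = [p - inner_lr * g for p, g in zip(fast, grads)]
    meta_opt.zero_grad()
    task_loss(fast, query).backward()   # meta-update of the shared initialization
    meta_opt.step()
```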

Latest papers

Liver cancer is one of the most common cancers worldwide. Because texture changes of liver tumors are inconspicuous, contrast-enhanced computed tomography (CT) imaging is effective for the diagnosis of liver cancer. In this paper, we focus on improving automated liver tumor segmentation by integrating multi-modal CT images. To this end, we propose a novel mutual learning (ML) strategy for effective and robust multi-modal liver tumor segmentation. Different from existing multi-modal methods that fuse information from different modalities with a single model, in ML an ensemble of modality-specific models learns collaboratively, with the models teaching each other to distill both the characteristics of and the commonality between high-level representations of different modalities. The proposed ML not only yields superior multi-modal learning but can also handle missing modalities by transferring knowledge from existing modalities to missing ones. Additionally, we present a modality-aware (MA) module, in which the modality-specific models are interconnected and calibrated with attention weights for adaptive information exchange. The proposed modality-aware mutual learning (MAML) method achieves promising results for liver tumor segmentation on a large-scale clinical dataset. Moreover, we show the efficacy and robustness of MAML in handling missing modalities on both the liver tumor and public brain tumor (BRATS 2018) datasets. Our code is available at https://github.com/YaoZhang93/MAML.
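As a loose illustration of the mutual-learning idea only (not the authors' released code at the URL above), the PyTorch sketch below trains two hypothetical modality-specific networks jointly: each is supervised by the labels and softly aligned with the other's predictions via a KL term. The attention-based modality-aware module and the missing-modality transfer described in the abstract are omitted.

```python
import torch
import torch.nn.functional as F

def make_net(in_ch=1, out_ch=2):
    # Tiny stand-in for a per-modality segmentation backbone.
    return torch.nn.Sequential(
        torch.nn.Conv2d(in_ch, 16, 3, padding=1), torch.nn.ReLU(),
        torch.nn.Conv2d(16, out_ch, 1))

net_a, net_b = make_net(), make_net()            # e.g. two contrast phases of CT
opt = torch.optim.Adam(list(net_a.parameters()) + list(net_b.parameters()), lr=1e-4)
lam = 0.5                                        # weight of the mutual (KL) term

def mutual_step(x_a, x_b, y):
    logits_a, logits_b = net_a(x_a), net_b(x_b)
    seg = F.cross_entropy(logits_a, y) + F.cross_entropy(logits_b, y)
    # Each model "teaches" the other by matching the other's class distribution.
    kl_ab = F.kl_div(F.log_softmax(logits_a, dim=1),
                     F.softmax(logits_b, dim=1).detach(), reduction='batchmean')
    kl_ba = F.kl_div(F.log_softmax(logits_b, dim=1),
                     F.softmax(logits_a, dim=1).detach(), reduction='batchmean')
    loss = seg + lam * (kl_ab + kl_ba)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Usage with dummy tensors standing in for two CT modalities and a label map.
x_a = torch.randn(2, 1, 64, 64)
x_b = torch.randn(2, 1, 64, 64)
y = torch.randint(2, (2, 64, 64))
mutual_step(x_a, x_b, y)
```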
