Meta-learning algorithms adapt quickly to new tasks that are drawn from the same task distribution as the training tasks. The mechanism leading to fast adaptation is the conditioning of a downstream predictive model on the inferred representation of the task's underlying data generative process, or \emph{function}. This \emph{meta-representation}, which is computed from a few observed examples of the underlying function, is learned jointly with the predictive model. In this work, we study the implications of this joint training on the transferability of the meta-representations. Our goal is to learn meta-representations that are robust to noise in the data and facilitate solving a wide range of downstream tasks that share the same underlying functions. To this end, we propose a decoupled encoder-decoder approach to supervised meta-learning, where the encoder is trained with a contrastive objective to find a good representation of the underlying function. In particular, our training scheme is driven by the self-supervision signal indicating whether two sets of examples stem from the same function. Our experiments on a number of synthetic and real-world datasets show that the representations we obtain outperform strong baselines in terms of downstream performance and noise robustness, even when these baselines are trained in an end-to-end manner.
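To make the self-supervised training signal concrete, below is a minimal, hypothetical sketch (not the paper's released code) of the contrastive encoder training described above: two disjoint sets of $(x, y)$ examples drawn from the same underlying function form a positive pair, while sets from other functions in the batch serve as negatives. The mean-pooled set encoder and the InfoNCE-style loss are illustrative assumptions, written in PyTorch.

\begin{verbatim}
# Hypothetical sketch of function-contrastive encoder training.
# Assumptions (not from the paper): a DeepSets-style mean-pooled
# set encoder and an InfoNCE loss with in-batch negatives.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SetEncoder(nn.Module):
    """Permutation-invariant encoder: embeds each (x, y) pair,
    then mean-pools over the set to get a function representation."""

    def __init__(self, x_dim, y_dim, hidden=128, rep_dim=64):
        super().__init__()
        self.pair_net = nn.Sequential(
            nn.Linear(x_dim + y_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.head = nn.Linear(hidden, rep_dim)

    def forward(self, x, y):
        # x: [batch, set_size, x_dim], y: [batch, set_size, y_dim]
        h = self.pair_net(torch.cat([x, y], dim=-1))
        return self.head(h.mean(dim=1))  # [batch, rep_dim]


def function_contrastive_loss(z_a, z_b, temperature=0.1):
    """InfoNCE: row i of z_a should match row i of z_b (two views
    of the same function); other rows in the batch are negatives."""
    z_a = F.normalize(z_a, dim=-1)
    z_b = F.normalize(z_b, dim=-1)
    logits = z_a @ z_b.t() / temperature  # [batch, batch]
    targets = torch.arange(z_a.size(0), device=z_a.device)
    return F.cross_entropy(logits, targets)


# Toy usage: each batch element is a distinct 1-D function
# y = a * sin(x) + b; two disjoint sample sets give the two views.
if __name__ == "__main__":
    batch, set_size = 32, 10
    a = torch.rand(batch, 1, 1) * 4 - 2
    b = torch.rand(batch, 1, 1) * 2 - 1
    x_a = torch.rand(batch, set_size, 1) * 6
    x_b = torch.rand(batch, set_size, 1) * 6
    y_a = a * torch.sin(x_a) + b  # view A of each function
    y_b = a * torch.sin(x_b) + b  # view B of the same function

    encoder = SetEncoder(x_dim=1, y_dim=1)
    loss = function_contrastive_loss(encoder(x_a, y_a),
                                     encoder(x_b, y_b))
    loss.backward()
    print(f"contrastive loss: {loss.item():.4f}")
\end{verbatim}

Because the loss depends only on whether two sample sets stem from the same function, the encoder can be trained without any downstream labels; a predictive decoder is then fit on top of the frozen representations.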