Standard meta-learning for representation learning aims to find a common representation to be shared across multiple tasks. The effectiveness of these methods is often limited when the nuances of the tasks' distribution cannot be captured by a single representation. In this work we overcome this issue by inferring a conditioning function, mapping the tasks' side information (such as the tasks' training dataset itself) into a representation tailored to the task at hand. We study environments in which our conditional strategy outperforms standard meta-learning, such as those in which tasks can be organized in separate clusters according to the representation they share. We then propose a meta-algorithm capable of leveraging this advantage in practice. In the unconditional setting, our method yields a new estimator enjoying faster learning rates and requiring fewer hyper-parameters to tune than current state-of-the-art methods. Our results are supported by preliminary experiments.
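To make the conditioning idea concrete, the following is a minimal toy sketch, not the paper's estimator: tasks fall into two clusters, each sharing a one-dimensional linear representation, and a hypothetical conditioning function (here, a nearest-prototype rule on a ridge summary of the task's own training set) maps each task to its cluster's representation, while the unconditional strategy must settle for a single shared direction. All function names, the prototype rule, and the data-generating choices below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 20  # input dimension

# Two task clusters, each sharing a distinct 1-D representation (a direction in R^d).
u1 = rng.normal(size=d); u1 /= np.linalg.norm(u1)
u2 = rng.normal(size=d); u2 /= np.linalg.norm(u2)

def sample_task(u, n=30, noise=0.1):
    """Linear regression task whose weight vector lies along the cluster direction u."""
    w = u * rng.normal(loc=2.0)          # task weights share the cluster's representation
    X = rng.normal(size=(n, d))
    y = X @ w + noise * rng.normal(size=n)
    return X, y

def ridge(X, y, lam=1.0):
    """Per-task summary statistic: a ridge estimate of the task's weight vector."""
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def conditioning_function(X, y, prototypes):
    """Map the task's training set (its side information) to a representation.

    Nearest-prototype assignment on the ridge summary -- a hypothetical stand-in
    for the learned conditioning function described in the abstract."""
    s = ridge(X, y)
    scores = [abs(s @ p) for p in prototypes]
    return prototypes[int(np.argmax(scores))]

def task_error(u_repr, X, y):
    """In-task fit error when learning is restricted to the 1-D representation u_repr."""
    z = X @ u_repr                       # project inputs onto the representation
    alpha = (z @ y) / (z @ z)            # least-squares fit in the 1-D feature space
    return np.mean((z * alpha - y) ** 2)

prototypes = [u1, u2]
shared = (u1 + u2) / np.linalg.norm(u1 + u2)   # best single "unconditional" direction

errs_cond, errs_uncond = [], []
for u in (u1, u2):
    for _ in range(50):
        X, y = sample_task(u)
        u_hat = conditioning_function(X, y, prototypes)
        errs_cond.append(task_error(u_hat, X, y))
        errs_uncond.append(task_error(shared, X, y))

print(f"conditional   MSE: {np.mean(errs_cond):.3f}")
print(f"unconditional MSE: {np.mean(errs_uncond):.3f}")
```

In this clustered environment the conditional strategy recovers each task's representation almost exactly, while any single shared direction misses roughly half the signal of tasks in both clusters, mirroring the advantage the abstract claims for conditional meta-learning.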