Developing algorithms that are able to generalize to a novel task given only a few labeled examples represents a fundamental challenge in closing the gap between machine- and human-level performance. The core of human cognition lies in the structured, reusable concepts that help us rapidly adapt to new tasks and provide reasoning behind our decisions. However, existing meta-learning methods learn complex representations across prior labeled tasks without imposing any structure on the learned representations. Here we propose COMET, a meta-learning method that improves generalization ability by learning to learn along human-interpretable concept dimensions. Instead of learning a joint unstructured metric space, COMET learns mappings of high-level concepts into semi-structured metric spaces and effectively combines the outputs of independent concept learners. We evaluate our model on few-shot tasks from diverse domains, including fine-grained image classification, document categorization, and cell type annotation on a novel dataset from the biological domain developed in this work. COMET significantly outperforms strong meta-learning baselines, achieving a 6-15% relative improvement on the most challenging 1-shot learning tasks, while, unlike existing methods, providing interpretations behind its predictions.
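To make the idea of combining independent concept learners concrete, the following is a minimal sketch (not the authors' implementation) of prototype-based few-shot classification along concept dimensions. It assumes inputs are flat feature vectors and that a hypothetical list `concept_masks` selects the feature dimensions belonging to each concept; each concept gets its own small embedding network, and per-concept distance scores are summed.

```python
# Minimal sketch of classification along independent concept dimensions.
# Assumptions (not from the paper's code): feature-vector inputs, binary
# `concept_masks` selecting each concept's dimensions, and one linear
# encoder per concept for illustration.

import torch
import torch.nn as nn


class ConceptLearners(nn.Module):
    def __init__(self, feature_dim, embed_dim, concept_masks):
        super().__init__()
        self.concept_masks = [m.float() for m in concept_masks]
        # One independent embedding network per concept.
        self.encoders = nn.ModuleList(
            [nn.Linear(feature_dim, embed_dim) for _ in concept_masks]
        )

    def forward(self, support_x, support_y, query_x, n_classes):
        # Each concept learner votes independently; scores are summed.
        scores = 0.0
        for mask, enc in zip(self.concept_masks, self.encoders):
            s_emb = enc(support_x * mask)   # embed concept-masked support set
            q_emb = enc(query_x * mask)     # embed concept-masked queries
            # Per-class prototypes in this concept's metric space.
            protos = torch.stack(
                [s_emb[support_y == c].mean(0) for c in range(n_classes)]
            )
            # Negative squared Euclidean distance as the concept-level score.
            scores = scores + (-torch.cdist(q_emb, protos) ** 2)
        return scores                        # shape: [n_query, n_classes]


# Toy 2-way 1-shot episode with random features.
feature_dim, embed_dim = 8, 4
masks = [torch.tensor([1, 1, 1, 1, 0, 0, 0, 0]),
         torch.tensor([0, 0, 0, 0, 1, 1, 1, 1])]
model = ConceptLearners(feature_dim, embed_dim, masks)
support_x = torch.randn(2, feature_dim)
support_y = torch.tensor([0, 1])
query_x = torch.randn(3, feature_dim)
logits = model(support_x, support_y, query_x, n_classes=2)
pred = logits.argmax(dim=1)
```

Because each concept contributes a separate distance term, the per-concept scores can be inspected to see which concepts drove a given prediction, which is the interpretability property the abstract refers to.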