Learning from a few examples remains a key challenge in machine learning. Despite recent advances in important domains such as vision and language, the standard supervised deep learning paradigm does not offer a satisfactory solution for learning new concepts rapidly from little data. In this work, we employ ideas from metric learning based on deep neural features and from recent advances that augment neural networks with external memories. Our framework learns a network that maps a small labelled support set and an unlabelled example to its label, obviating the need for fine-tuning to adapt to new class types. We then define one-shot learning problems on vision (using Omniglot, ImageNet) and language tasks. Our algorithm improves one-shot accuracy on ImageNet from 87.6% to 93.2% and from 88.0% to 93.8% on Omniglot compared to competing approaches. We also demonstrate the usefulness of the same model on language modeling by introducing a one-shot task on the Penn Treebank.
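The mapping from a support set and an unlabelled example to a label can be sketched as attention-weighted nearest-neighbour classification over embedded support points. The sketch below is a minimal illustration, not the paper's full model: `matching_classify` and its embedding inputs are hypothetical names, and it assumes feature vectors have already been produced by some embedding network.

```python
import numpy as np

def matching_classify(support_feats, support_labels, query_feat, n_classes):
    """Classify a query by softmax attention over a labelled support set.

    support_feats: (k, d) array of embedded support examples
    support_labels: length-k list of integer class labels
    query_feat: (d,) embedded query example
    Returns a length-n_classes probability vector.
    """
    # Cosine similarity between the query and each support embedding.
    s = support_feats / np.linalg.norm(support_feats, axis=1, keepdims=True)
    q = query_feat / np.linalg.norm(query_feat)
    sims = s @ q
    # Softmax attention over the support set (shift for numerical stability).
    a = np.exp(sims - sims.max())
    a /= a.sum()
    # Predicted distribution: attention-weighted sum of one-hot support labels.
    probs = np.zeros(n_classes)
    for w, y in zip(a, support_labels):
        probs[y] += w
    return probs
```

Because the prediction is a differentiable function of the support set, no fine-tuning step is needed at test time: new classes are handled simply by conditioning on a new support set.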