Learning a deep model from small data remains an open and challenging problem. We focus on one-shot classification with a deep learning approach trained on a small number of samples. We propose a novel deep learning approach named Local Contrast Learning (LCL), based on a key insight into human cognition: humans recognize an object in a specific context by contrasting it with other objects in that context or in memory. LCL trains a deep model to contrast the sample being recognized against a set of contrastive samples that are randomly drawn and shuffled. On the one-shot classification task on Omniglot, an LCL-based deep model with 122 layers and 1.94 million parameters, trained on a tiny dataset of only 60 classes with 20 samples per class, achieved 97.99% accuracy, outperforming both humans and the state of the art established by Bayesian Program Learning (BPL) trained on 964 classes. LCL is a fundamental idea that can be applied to alleviate the overfitting of parametric models caused by a lack of training samples.
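The abstract only states that the model contrasts the recognized sample against randomly drawn, shuffled contrastive samples. The minimal PyTorch sketch below illustrates one plausible reading of that training scheme; the episode construction (make_contrast_episode), the PairScorer network, and the cross-entropy objective over shuffled positions are illustrative assumptions, not the paper's 122-layer implementation.

```python
import random
import torch
import torch.nn as nn
import torch.nn.functional as F


class PairScorer(nn.Module):
    """Placeholder scorer (an assumption; the paper's 122-layer
    architecture is not reproduced here). Embeds the query and each
    contrastive sample, then scores pairs by negative squared distance."""

    def __init__(self, in_dim, emb_dim=64):
        super().__init__()
        self.embed = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, emb_dim)
        )

    def forward(self, query, contrast):
        # query: (B, D); contrast: (B, K, D) -> scores: (B, K)
        q = self.embed(query).unsqueeze(1)   # (B, 1, E)
        c = self.embed(contrast)             # (B, K, E)
        return -((q - c) ** 2).sum(dim=-1)


def make_contrast_episode(dataset, num_contrast=5):
    """Build one LCL-style episode. `dataset` maps each class label to a
    list of feature tensors (each class needs at least two samples).
    Returns the query, a shuffled stack of contrastive samples (one per
    drawn class, including the query's own class), and the match index."""
    classes = random.sample(list(dataset), num_contrast)
    query_class = random.choice(classes)
    query, positive = random.sample(dataset[query_class], 2)
    contrast = [positive if c == query_class else random.choice(dataset[c])
                for c in classes]
    perm = random.sample(range(num_contrast), num_contrast)
    shuffled = torch.stack([contrast[i] for i in perm])
    # Shuffling ensures position carries no class information; the target
    # is the post-shuffle position of the same-class sample.
    target = perm.index(classes.index(query_class))
    return query, shuffled, target


def lcl_step(model, dataset, optimizer, num_contrast=5):
    """One training step: push the model to give its highest score to the
    contrastive sample that matches the query's class."""
    query, contrast, target = make_contrast_episode(dataset, num_contrast)
    scores = model(query.unsqueeze(0), contrast.unsqueeze(0))  # (1, K)
    loss = F.cross_entropy(scores, torch.tensor([target]))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Under this reading, repeating lcl_step over freshly drawn episodes trains the model purely through local contrasts, and at test time a one-shot query is classified by scoring it against one exemplar per candidate class and taking the argmax.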