We aim to bridge the gap between common-sense, few-sample human learning and large-data machine learning. We derive a theory of human-like few-shot learning from the von Neumann-Landauer principle. Modelling human learning is difficult because how people learn varies from person to person. Under commonly accepted definitions, and assuming the Church-Turing thesis, we prove that all human or animal few-shot learning, as well as the major models of such learning including the Free Energy Principle and Bayesian Program Learning, approximate our theory. We find that deep generative models such as the variational autoencoder (VAE) can be used to approximate our theory, and that they perform significantly better than baseline models, including deep neural networks, on image recognition, low-resource language processing, and character recognition.
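Since the abstract names the variational autoencoder as the deep generative model used to approximate the theory, a minimal VAE sketch follows, assuming PyTorch and flattened 28x28 grayscale inputs (e.g., handwritten characters). The architecture, dimensions, and loss shown are illustrative only, not the paper's reported implementation.

```python
# Minimal VAE sketch, assuming PyTorch and 784-dimensional inputs;
# illustrative only, not the paper's exact architecture or training setup.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, input_dim=784, hidden_dim=400, latent_dim=20):
        super().__init__()
        self.enc = nn.Linear(input_dim, hidden_dim)
        self.mu = nn.Linear(hidden_dim, latent_dim)      # mean of q(z|x)
        self.logvar = nn.Linear(hidden_dim, latent_dim)  # log-variance of q(z|x)
        self.dec1 = nn.Linear(latent_dim, hidden_dim)
        self.dec2 = nn.Linear(hidden_dim, input_dim)

    def encode(self, x):
        h = F.relu(self.enc(x))
        return self.mu(h), self.logvar(h)

    def reparameterize(self, mu, logvar):
        # Sample z ~ q(z|x) via the reparameterization trick.
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def decode(self, z):
        return torch.sigmoid(self.dec2(F.relu(self.dec1(z))))

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        return self.decode(z), mu, logvar

def negative_elbo(recon, x, mu, logvar):
    # Reconstruction term plus KL divergence from q(z|x) to the prior N(0, I).
    bce = F.binary_cross_entropy(recon, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return bce + kld
```

Training such a model by minimizing the negative ELBO is the standard VAE objective; how this objective relates to the few-shot learning theory is developed in the body of the paper, not in this sketch.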