Meta-learning, often referred to as learning-to-learn, is a promising paradigm that mimics human learning by exploiting knowledge of prior tasks while adapting quickly to novel tasks. A plethora of models has emerged in this context, improving learning efficiency, robustness, and related properties. The question that arises here is: can we emulate other aspects of human learning and incorporate them into existing meta-learning algorithms? Inspired by the widely recognized finding in neuroscience that distinct parts of the brain are highly specialized for different types of tasks, we aim to improve the performance of current meta-learning algorithms by selectively using only parts of the model, conditioned on the input task. In this work, we describe an approach that investigates task-dependent dynamic neuron selection in deep convolutional neural networks (CNNs) by leveraging the scaling factor in the batch normalization (BN) layer associated with each convolutional layer. The problem is intriguing because encouraging different parts of the model to learn from different types of tasks may help train better filters in CNNs and improve the model's generalization performance. We find that the proposed approach, neural routing in meta-learning (NRML), outperforms one of the well-known existing meta-learning baselines on few-shot classification tasks on the most widely used benchmark datasets.
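To make the routing idea concrete, the sketch below shows one plausible way a BN scaling factor could gate the channels of a convolutional layer in PyTorch. The GatedConvBlock module, the hard-threshold gating rule, and the threshold value are illustrative assumptions for exposition; the abstract does not specify the exact NRML selection mechanism.

```python
import torch
import torch.nn as nn

class GatedConvBlock(nn.Module):
    """Conv + BN block whose output channels are gated by the magnitude
    of the BN scaling factor (gamma). Channels whose |gamma| falls below
    a threshold are suppressed, approximating task-dependent routing.
    The hard threshold used here is an illustrative assumption, not the
    exact NRML formulation."""

    def __init__(self, in_ch: int, out_ch: int, threshold: float = 0.05):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        self.bn = nn.BatchNorm2d(out_ch)  # bn.weight is the per-channel gamma
        self.threshold = threshold

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.bn(self.conv(x))
        # Keep a channel only if its learned BN scale is large enough.
        gate = (self.bn.weight.abs() > self.threshold).float()
        return torch.relu(x * gate.view(1, -1, 1, 1))
```

Under this reading, channels whose gamma stays near zero contribute nothing to the forward pass; if gamma is adapted per task (e.g., in the inner loop of a meta-learning algorithm), different tasks would effectively activate different subsets of filters.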