Deep neural networks (DNNs) are known to perform well when deployed to test distributions that share high similarity with the training distribution. Feeding DNNs sequentially with new data unseen during training poses two major challenges: fast adaptation to new tasks and catastrophic forgetting of old tasks. These difficulties have paved the way for ongoing research on few-shot learning and continual learning. To tackle these problems, we introduce Attentive Independent Mechanisms (AIM). We incorporate the idea of learning with fast and slow weights in conjunction with decoupling the feature extraction and higher-order conceptual learning of a DNN. AIM is designed for higher-order conceptual learning, modeled by a mixture of experts that compete to learn independent concepts to solve a new task. AIM is a modular component that can be inserted into existing deep learning frameworks. We demonstrate its capability for few-shot learning by adding it to SIB and training on MiniImageNet and CIFAR-FS, showing significant improvement. AIM is also applied to ANML and OML, trained on Omniglot, CIFAR-100 and MiniImageNet, to demonstrate its capability in continual learning. Code is made publicly available at https://github.com/huang50213/AIM-Fewshot-Continual.
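To make the high-level description concrete, the following is a minimal, hypothetical sketch of an AIM-style layer in PyTorch: a set of independent mechanisms (small MLP experts) whose learnable keys are attended to by the incoming features, with only the top-k winning mechanisms contributing to the output. The mechanism count, top-k competition, and dot-product attention shown here are illustrative assumptions, not the exact implementation in the released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentiveIndependentMechanisms(nn.Module):
    """Illustrative AIM-style layer: attention-gated mixture of independent experts."""
    def __init__(self, feat_dim: int, num_mechanisms: int = 8, top_k: int = 2, hidden_dim: int = 64):
        super().__init__()
        self.top_k = top_k
        # Each mechanism is an independent small MLP ("expert").
        self.mechanisms = nn.ModuleList([
            nn.Sequential(nn.Linear(feat_dim, hidden_dim), nn.ReLU(), nn.Linear(hidden_dim, feat_dim))
            for _ in range(num_mechanisms)
        ])
        # One learnable key per mechanism; the input feature acts as the query.
        self.keys = nn.Parameter(torch.randn(num_mechanisms, feat_dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, feat_dim) features from the (slowly learned) feature extractor.
        scores = x @ self.keys.t()                          # (batch, num_mechanisms)
        attn = F.softmax(scores, dim=-1)
        # Competition: keep only the top-k mechanisms per example.
        topk_val, topk_idx = attn.topk(self.top_k, dim=-1)
        mask = torch.zeros_like(attn).scatter_(-1, topk_idx, topk_val)
        outputs = torch.stack([m(x) for m in self.mechanisms], dim=1)  # (batch, M, feat_dim)
        return (mask.unsqueeze(-1) * outputs).sum(dim=1)               # (batch, feat_dim)

# Usage sketch: insert between a feature extractor and a task-specific head.
feats = torch.randn(4, 128)
aim = AttentiveIndependentMechanisms(feat_dim=128)
out = aim(feats)  # (4, 128), fed to the fast-adapting classifier head
```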