Learning quickly and continually remains an ambitious goal for neural networks. Indeed, many real-world applications do not reflect the learning setting where neural networks shine, as data are usually scarce, mostly unlabelled, and arrive as a stream. To narrow this gap, we introduce FUSION - Few-shot UnSupervIsed cONtinual learning - a novel strategy aimed at neural networks that "learn in the wild", simulating a realistic distribution and flow of unbalanced tasks. We equip FUSION with MEML - Meta-Example Meta-Learning - a new module that simultaneously alleviates catastrophic forgetting and favours generalisation and future learning of new tasks. To encourage feature reuse during meta-optimisation, our model exploits a single inner loop per task, taking advantage of an aggregated representation obtained through a self-attention mechanism. To further enhance the generalisation capability of MEML, we extend it with a technique that creates several augmented tasks and optimises over the hardest one. Experimental results on few-shot learning benchmarks show that our model outperforms the other baselines in both FUSION and the fully supervised case. We also explore its behaviour in standard continual learning, where it consistently outperforms state-of-the-art approaches.