Task-incremental learning involves the challenging problem of learning new tasks continually without forgetting past knowledge. Many approaches address the problem by expanding the structure of a shared neural network as tasks arrive, but struggle to grow the network optimally without losing past knowledge. We present a new framework, Learn to Bind and Grow, which learns a neural architecture for a new task incrementally, either by binding with layers of a similar task or by expanding layers that are more likely to conflict across tasks. Central to our approach is a novel, interpretable parameterization of the shared, multi-task architecture space, which enables computing globally optimal architectures using Bayesian optimization. Experiments on continual learning benchmarks show that our framework performs comparably to earlier expansion-based approaches and can flexibly compute multiple optimal solutions with performance-size trade-offs.
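As a rough illustration of the bind-or-grow parameterization and the Bayesian-optimization search it enables, the sketch below (not the paper's implementation) encodes each shared layer as a binary bind/grow decision and optimizes a hypothetical accuracy-minus-size objective with scikit-optimize's `gp_minimize`; `toy_accuracy`, `ALPHA`, and `NUM_LAYERS` are illustrative assumptions.

```python
# A minimal sketch, assuming a layer-wise binary parameterization of the shared
# architecture space: 0 = bind (reuse the corresponding layer of a similar past task),
# 1 = grow (allocate a new task-specific layer). Bayesian optimization then searches
# this space for configurations trading off accuracy against model size.
from skopt import gp_minimize
from skopt.space import Categorical

NUM_LAYERS = 4   # number of backbone layers shared across tasks (assumed)
ALPHA = 0.1      # hypothetical weight on the model-size penalty

def toy_accuracy(bind_or_grow):
    # Stand-in for training and evaluating the new task under this configuration:
    # here we pretend later layers conflict more across tasks, so growing them helps.
    return 0.7 + 0.05 * bind_or_grow[2] + 0.1 * bind_or_grow[3]

def objective(bind_or_grow):
    accuracy = toy_accuracy(bind_or_grow)
    size_cost = sum(bind_or_grow) / NUM_LAYERS   # fraction of layers grown
    return -(accuracy - ALPHA * size_cost)       # gp_minimize minimizes

# Gaussian-process Bayesian optimization over the 2^NUM_LAYERS architecture space.
result = gp_minimize(
    objective,
    dimensions=[Categorical([0, 1]) for _ in range(NUM_LAYERS)],
    n_calls=20,
    random_state=0,
)
print("best bind/grow configuration per layer:", result.x)
```

Varying the size penalty `ALPHA` in such a setup would yield different optima along the performance-size trade-off, mirroring the multiple optimal solutions mentioned above.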