In real-world settings, biological agents do not have infinite resources to learn new things. It is thus useful to recycle previously acquired knowledge in a way that allows for faster, less resource-intensive acquisition of multiple new skills. Neural networks in the brain are likely not entirely retrained on new tasks, but how they leverage existing computations to learn new tasks is not well understood. In this work, we study this question in artificial neural networks trained on commonly used neuroscience paradigms. Building on recent work from the multi-task learning literature, we propose two ingredients: (1) network modularity, and (2) learning task primitives. Together, these ingredients form inductive biases we call structural and functional, respectively. Using a corpus of nine different tasks, we show that a modular network endowed with task primitives learns multiple tasks well while keeping parameter counts, and parameter updates, low. We also show that the skills acquired with our approach are more robust to a broad range of perturbations than those acquired with other multi-task learning strategies. This work offers a new perspective on achieving efficient multi-task learning in the brain, and makes predictions for novel neuroscience experiments in which targeted perturbations are employed to explore solution spaces.
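To make the two ingredients concrete, the following is a minimal sketch, not the paper's implementation: a module pretrained on task primitives is frozen and reused, while a small task-specific module and readout are trained on each new task. All names, module types, and sizes (ModularNet, the RNN modules, hidden dimensions) are illustrative assumptions.

```python
# Minimal sketch of a modular network with reusable task primitives.
# Assumption: the "structural" bias is implemented as separate recurrent
# modules, and the "functional" bias as a module pretrained on primitives
# and then frozen; only the small task module and readout are updated.
import torch
import torch.nn as nn

class ModularNet(nn.Module):
    def __init__(self, n_in, n_primitive=128, n_task=32, n_out=3):
        super().__init__()
        # Module 1: pretrained on task primitives, then frozen.
        self.primitive = nn.RNN(n_in, n_primitive, batch_first=True)
        # Module 2: small task-specific module, trained per task.
        self.task = nn.RNN(n_in + n_primitive, n_task, batch_first=True)
        self.readout = nn.Linear(n_task, n_out)

    def freeze_primitives(self):
        # After pretraining, only the task module and readout receive
        # gradient updates, keeping per-task parameter counts low.
        for p in self.primitive.parameters():
            p.requires_grad = False

    def forward(self, x):
        h_prim, _ = self.primitive(x)  # reuse existing computations
        h_task, _ = self.task(torch.cat([x, h_prim], dim=-1))
        return self.readout(h_task)

net = ModularNet(n_in=5)
net.freeze_primitives()
trainable = [p for p in net.parameters() if p.requires_grad]
optimizer = torch.optim.Adam(trainable, lr=1e-3)
y = net(torch.randn(8, 50, 5))  # (batch, time, input features)
print(y.shape)                  # torch.Size([8, 50, 3])
```

Freezing the primitive module means each new task only adds and updates the small task module and readout, which is one plausible way to keep both parameter counts and the number of weight updates low in the sense described above.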