Deep neural networks (DNNs) are powerful tools for learning sophisticated but fixed mapping rules between inputs and outputs, which limits their application in complex and dynamic situations where the mapping rules do not stay the same but change with context. To lift this limit, we developed a novel approach combining a learning algorithm, called orthogonal weights modification (OWM), with a context-dependent processing (CDP) module. We demonstrated that, with OWM overcoming the problem of catastrophic forgetting and the CDP module learning to reuse a feature representation and a classifier across contexts, a single network can acquire numerous context-dependent mapping rules in an online and continual manner, with as few as $\sim$10 samples needed to learn each. This should enable highly compact systems to gradually learn the myriad regularities of the real world and eventually behave appropriately within it.
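The core mechanism of OWM, as the name suggests, is to modify weights only in directions orthogonal to the input subspace spanned by previously learned tasks, so that new learning does not disturb old input-output mappings. The following is a minimal toy sketch of that idea for a single linear layer; the dimensions, the regularizer `alpha`, and the stand-in "gradient" are illustrative assumptions, not the paper's actual training setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative, not the paper's architecture): a linear layer
# y = W @ x first learns task A, then is updated for task B with the
# update projected orthogonal to task A's input subspace.
d_in, d_out = 8, 3
W = rng.normal(size=(d_out, d_in))

# Task A: a few input vectors whose current outputs we want to preserve.
X_a = rng.normal(size=(d_in, 4))   # columns are task-A inputs
y_a_before = W @ X_a

# Projector onto the orthogonal complement of span(X_a):
#   P = I - A (A^T A + alpha I)^{-1} A^T,  alpha a small regularizer.
alpha = 1e-3
P = np.eye(d_in) - X_a @ np.linalg.inv(X_a.T @ X_a + alpha * np.eye(4)) @ X_a.T

# Task B: take a gradient-like update and project it through P on the
# input side before applying it (the OWM-style modification).
grad = rng.normal(size=(d_out, d_in))  # stand-in for a backprop gradient
W = W - 0.1 * grad @ P

# Because P @ X_a is (nearly) zero, task-A outputs are almost unchanged.
drift = np.max(np.abs(W @ X_a - y_a_before))
print(drift)  # small residual, bounded by the alpha regularization
```

The key property is that any update of the form `delta @ P` satisfies `delta @ P @ X_a ≈ 0`, so learning on new data leaves the old mappings intact; this is what lets a single network accumulate many rules without catastrophic forgetting.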