This paper takes a parallel learning approach for robust and transparent AI. A deep neural network is trained in parallel on multiple tasks, where each task is trained on only a subset of the network's resources. Each subset consists of network segments that can be combined and shared across specific tasks. A task can share resources with other tasks while also retaining independent, task-specific network resources. As a result, the trained network can share similar representations across tasks while also maintaining independent task-specific representations. This design yields several important outcomes. (1) The parallel nature of our approach avoids the problem of catastrophic forgetting. (2) Sharing segments uses network resources more efficiently. (3) We show that the network does reuse knowledge learned in some tasks for other tasks through the shared representations. (4) By examining individual task-specific and shared representations, the model offers transparency into the network and into the relationships across tasks in a multi-task setting. Evaluation of the proposed approach against complex competing approaches such as Continual Learning, Neural Architecture Search, and Multi-task Learning shows that it is capable of learning robust representations. This is the first effort to train a deep learning model on multiple tasks in parallel. Our code is available at https://github.com/MahsaPaknezhad/PaRT
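To make the idea of per-task segment subsets concrete, below is a minimal PyTorch sketch, not the authors' PaRT implementation, of routing each task through a binary mask over fixed-size segments of a shared layer so that tasks share some segments and keep others private; all names (SegmentedLinear, task_masks, the segment assignments) are hypothetical.

```python
# Illustrative sketch: each task uses only a subset of "segments" (groups of
# units) in a shared layer. Shared segments receive gradients from several
# tasks; private segments are updated by one task only.
import torch
import torch.nn as nn

class SegmentedLinear(nn.Module):
    """Linear layer whose outputs are split into equal segments; a binary
    mask selects which segments a given task may use."""
    def __init__(self, in_dim, out_dim, num_segments):
        super().__init__()
        assert out_dim % num_segments == 0
        self.linear = nn.Linear(in_dim, out_dim)
        self.seg_size = out_dim // num_segments

    def forward(self, x, seg_mask):
        # seg_mask: (num_segments,) tensor of 0s/1s for the current task
        out = self.linear(x)
        unit_mask = seg_mask.repeat_interleave(self.seg_size)
        return out * unit_mask  # zero out segments not assigned to this task

# Example assignment: tasks 0 and 1 share segment 0, keep 1 and 2 private.
task_masks = {
    0: torch.tensor([1., 1., 0.]),
    1: torch.tensor([1., 0., 1.]),
}

layer = SegmentedLinear(in_dim=16, out_dim=12, num_segments=3)
heads = nn.ModuleDict({str(t): nn.Linear(12, 5) for t in task_masks})
opt = torch.optim.SGD(list(layer.parameters()) + list(heads.parameters()), lr=0.1)

# Interleaved training: alternate mini-batches across tasks so no task is
# trained to completion before another starts (no sequential overwriting).
for step in range(4):
    for t, mask in task_masks.items():
        x = torch.randn(8, 16)
        y = torch.randint(0, 5, (8,))
        logits = heads[str(t)](layer(x, mask))
        loss = nn.functional.cross_entropy(logits, y)
        opt.zero_grad()
        loss.backward()
        opt.step()
```

Because masked segments are multiplied by zero, their weights receive no gradient from that task, which is one simple way to realize independent task-specific resources alongside shared ones.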