Artificial neural networks drew much of their inspiration from their biological counterparts on the way to becoming our best machine perceptual systems. This work summarizes some of that history and incorporates ideas from modern theoretical neuroscience into experiments with artificial neural networks from the field of deep learning. Specifically, iterative magnitude pruning is used to train sparsely connected networks with 33x fewer weights without loss of performance. These networks are used to test, and ultimately reject, the hypothesis that weight sparsity alone improves image noise robustness. Recent work mitigated catastrophic forgetting using weight sparsity, activation sparsity, and active dendrite modeling. This paper replicates those findings and extends the method to train convolutional neural networks on a more challenging continual learning task. The code has been made publicly available.
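As a rough illustration of the pruning procedure mentioned above, the sketch below shows a generic iterative magnitude pruning loop in PyTorch. This is a minimal sketch, not the paper's implementation: the model, the `train_one_round` placeholder, and the per-round pruning fraction are assumptions, and details such as rewinding weights between rounds are omitted.

```python
# Minimal sketch of iterative magnitude pruning (assumed setup, not the paper's code).
# Each round: train, then zero out the smallest-magnitude weights; masks persist,
# so the surviving fraction shrinks geometrically across rounds.
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(784, 300), nn.ReLU(), nn.Linear(300, 10))

def train_one_round(model):
    # Placeholder for the real training loop (data, optimizer, epochs).
    pass

rounds, amount_per_round = 16, 0.2  # 0.8**16 ≈ 3% of weights remain, on the order of 33x fewer
for _ in range(rounds):
    train_one_round(model)
    for module in model.modules():
        if isinstance(module, nn.Linear):
            # Prune a fraction of the currently unpruned weights by L1 magnitude.
            prune.l1_unstructured(module, name="weight", amount=amount_per_round)
```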