In this paper, we explore an alternative method for synthesizing neural network architectures, inspired by the brain's stochastic synaptic pruning. Over a person's lifetime, numerous distinct neuronal architectures are responsible for performing the same tasks. This indicates that biological neural networks are, to some degree, architecture agnostic. However, artificial networks rely on their fine-tuned weights and hand-crafted architectures for their remarkable performance. This contrast raises the question: Can we build artificial architecture agnostic neural networks? To ground this study, we use sparse, binary neural networks that parallel the brain's circuits. Within this sparse, binary paradigm, we sample many binary architectures to create families of architecture agnostic neural networks that are not trained via backpropagation. These high-performing network families share the same sparsity and distribution of binary weights, and succeed in both static and dynamic tasks. In summary, we create an architecture manifold search procedure to discover families of architecture agnostic neural networks.