Artificial neural networks can be represented by paths. When these paths are generated as random walks on a dense network graph, the resulting sparse networks allow for deterministic initialization and even weights with fixed sign. Such networks can be trained sparse from scratch, avoiding the expensive procedure of first training a dense network and then compressing it. Although the networks are sparse, their weights are accessed as contiguous blocks of memory. In addition, enumerating the paths using deterministic low discrepancy sequences, for example the Sobol' sequence, amounts to connecting the layers of neural units by progressive permutations, which naturally avoids bank conflicts in parallel computer hardware. We demonstrate that artificial neural networks generated by low discrepancy sequences can achieve accuracy close to that of their dense counterparts at a much lower computational complexity.
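The sketch below illustrates the enumeration idea in the abstract: each point of the Sobol' sequence selects one neuron per layer, and for power-of-two path counts the selected indices within each layer form permutations. This is a minimal illustration assuming SciPy's qmc.Sobol as the low discrepancy generator; the function enumerate_paths and its parameters are hypothetical names for this sketch, not the authors' implementation.

```python
import numpy as np
from scipy.stats import qmc  # SciPy's Sobol' sequence generator

def enumerate_paths(num_paths, layer_widths):
    """Map each Sobol' point to one neuron index per layer (a sketch).

    Path i passes through neuron indices[i, l] of layer l. Because every
    one-dimensional projection of the (unscrambled) Sobol' sequence is a
    (0, 1)-sequence in base 2, the first 2**m paths select each neuron of
    a layer of width 2**m exactly once, i.e. the layers are connected by
    progressive permutations.
    """
    sampler = qmc.Sobol(d=len(layer_widths), scramble=False)
    points = sampler.random(num_paths)          # shape: (num_paths, n_layers)
    widths = np.asarray(layer_widths)
    return (points * widths).astype(np.int64)   # neuron index per path and layer

# Example: 8 paths through three layers of width 8; each column of the
# result is a permutation of 0..7, so consecutive paths touch distinct
# neurons (and hence distinct memory banks).
print(enumerate_paths(8, [8, 8, 8]))
```

Since consecutive paths within such a block never select the same neuron of a layer, their weights can be laid out and fetched as contiguous blocks of memory without bank conflicts, which is the hardware property the abstract refers to.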