Thanks to their ease of implementation, multilayer perceptrons (MLPs) have become ubiquitous in deep learning applications. The graph underlying an MLP is multipartite, i.e., each layer of neurons connects only to neurons in the adjacent layers. In contrast, in vivo brain connectomes at the level of individual synapses suggest that biological neuronal networks are characterized by scale-free degree distributions or exponentially truncated power-law strength distributions, hinting at potentially novel avenues for exploiting evolution-derived neuronal networks. In this paper, we present "4Ward", a method and Python library capable of generating flexible and efficient neural networks (NNs) from arbitrarily complex directed acyclic graphs. 4Ward is inspired by layering algorithms drawn from the graph-drawing discipline to implement efficient forward passes, and it provides significant time gains in computational experiments with various Erd\H{o}s-R\'enyi graphs. 4Ward overcomes the sequential nature of the learning matrix method by parallelizing the computation of activations, and it gives the designer freedom to customize weight initialization and activation functions. Our algorithm can be of aid to any investigator seeking to exploit complex topologies in a NN design framework at the microscale.
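To make the layering idea concrete, the sketch below is a minimal illustration, not the 4Ward API itself: it assigns each node of a DAG a longest-path layer, in the spirit of graph-drawing layering algorithms, and then evaluates activations one layer at a time. Within a layer, no node depends on another, so their activations can be computed in parallel rather than sequentially node by node. The function names `layering` and `forward`, the edge attribute `"w"`, and the use of networkx and torch are assumptions made for this example.

```python
# Illustrative sketch (not the 4Ward API): layer a DAG by longest path
# from the sources, then run a forward pass one layer at a time.
import networkx as nx
import torch

def layering(dag):
    """Longest-path layering: nodes in the same layer share no edges,
    so their activations are mutually independent."""
    depth = {}
    for v in nx.topological_sort(dag):
        depth[v] = 1 + max((depth[u] for u in dag.predecessors(v)), default=-1)
    layers = {}
    for v, d in depth.items():
        layers.setdefault(d, []).append(v)
    return [layers[d] for d in sorted(layers)]

def forward(dag, layers, inputs, act=torch.tanh):
    """Forward pass that is sequential over layers only; every node in a
    given layer could be evaluated in parallel (here, a plain loop)."""
    h = dict(inputs)  # source-node activations
    for layer in layers[1:]:
        for v in layer:
            h[v] = act(sum(dag.edges[u, v]["w"] * h[u]
                           for u in dag.predecessors(v)))
    return h

# Example: a small DAG with edges oriented from low to high index.
g = nx.DiGraph([(0, 2), (1, 2), (0, 3), (2, 3)])
nx.set_edge_attributes(g, {e: 0.5 for e in g.edges}, "w")
layers = layering(g)  # [[0, 1], [2], [3]]
out = forward(g, layers, {0: torch.tensor(1.0), 1: torch.tensor(-1.0)})
print(out[3])
```

Because each node is visited exactly once, after all of its predecessors, the per-layer loop replaces the node-by-node sequential update that makes the learning matrix method slow on deep or wide DAGs.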