A unified understanding of neural networks (NNs) remains elusive, leaving users puzzled about which rules should be followed to optimize the internal structure of NNs. Given the potential of random graphs to alter how computation is performed, we demonstrate that they can serve as architecture generators for optimizing the internal structure of NNs. To turn random graph theory into an NN model of practical value, and after clarifying the input-output relationship of each neuron, we perform data feature mapping by computing Fourier Random Features (FRFs). With this low-cost approach, neurons are assigned to several groups whose connection relationships can be regarded as uniform representations of the random graphs they belong to, and random arrangement fuses these neurons to establish the pattern matrix, markedly reducing manual participation and computational cost without requiring a fixed, deep architecture. Building on this single neuromorphic learning model, termed the random graph-based neural network (RGNN), we develop a joint classification mechanism involving information interaction between multiple RGNNs and achieve significant performance improvements in supervised learning on three benchmark tasks, thereby effectively avoiding the adverse impact that the limited interpretability of NNs has on structure design and engineering practice.
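The Fourier Random Feature mapping mentioned above resembles the classical random Fourier feature construction for approximating a shift-invariant (e.g. RBF) kernel. A minimal sketch is given below; the function name, parameters, and Gaussian/uniform sampling choices are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def fourier_random_features(X, D=256, gamma=1.0, seed=0):
    """Map inputs X (n x d) to D random Fourier features.

    Assumed construction: cos(X @ W + b) with Gaussian-sampled
    frequencies W and uniform phases b, which approximates an
    RBF kernel exp(-gamma * ||x - y||^2) as D grows.
    """
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # Frequencies drawn from the RBF kernel's spectral density
    W = rng.normal(0.0, np.sqrt(2.0 * gamma), size=(d, D))
    # Random phase offsets in [0, 2*pi)
    b = rng.uniform(0.0, 2.0 * np.pi, size=D)
    # Scaling keeps the feature inner product an unbiased kernel estimate
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)
```

Such a mapping is cheap (one matrix product plus an elementwise cosine), which is consistent with the low-operation-cost claim; the resulting features could then be grouped and randomly arranged into the pattern matrix as the abstract describes.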